Investigation of Multi-Plane Scheme for Compensation of Fringe Effect of Electrical Resistance Tomography Sensor

Abstract: Conventional electrical resistance tomography (ERT) sensors suffer from the fringe effect, i.e., severe distortion of the electric field at both ends of the measurement electrodes, which gives a nominally 2D sensor a 3D sensing region. As a result, objects outside the ERT sensor plane affect the measurements and hence the image, deteriorating the image quality. To address this issue, a multi-plane ERT sensor scheme is proposed in this paper. With this scheme, auxiliary sensor planes provide references for the fringe effect of the measurement plane, and compensation is achieved by subtracting the weighted influence of the fringe effect. Simulation results show that the proposed scheme, with either a three-plane or a two-plane sensor, can compensate for the fringe effect induced by objects outside the measurement plane for a variety of axial object distributions, i.e., several non-conductive or conductive bars placed at different cross-sectional and axial positions inside the sensor. Experiments were carried out, images obtained with single-plane and multi-plane ERT sensors are compared, and the proposed compensation scheme is thereby verified.

Introduction

Electrical resistance tomography (ERT) is an imaging technique used to visualize and measure the distribution of objects or materials with different electrical properties within an imaging plane or volume of interest using a multi-electrode sensor. Conventional ERT adopts a current-injection and voltage-measurement strategy. To interrogate an imaging area of interest, an AC current is applied to one pair of electrodes at a time while the other electrodes are left floating, and the inter-electrode resistance is measured. This process continues until a complete set of data is taken. Finally, an image of the object or material distribution is reconstructed from the measurement data using an appropriate algorithm. ERT has found many applications, from biomedical imaging to multiphase flow measurement in industrial processes, providing 2D or even 3D images. Recently, research groups at NC State University and the University of Eastern Finland have obtained promising results for new applications of ERT, such as concrete damage detection and measurement of unsaturated moisture.

Image Reconstruction

A variety of reconstruction algorithms have been reported in the literature [14]. Among them, linear back-projection (LBP) and the Landweber iteration are the most popular. The Landweber iteration is an iterative method that can produce quantitative images, and it is used in this paper for image reconstruction. To implement the Landweber iteration, sensitivity maps of the ERT sensor and normalization of the measured data are needed. A 2D sensitivity map consists of the sensitivity of electrode pair (i, j) to the conductivity change of the pixel at position (x, y) with area P(x, y) [15]:

$$S_{i,j}(x, y) = -\int_{P(x,y)} \frac{\mathbf{E}_i(x, y)}{I_i} \cdot \frac{\mathbf{E}_j(x, y)}{I_j} \, \mathrm{d}x \, \mathrm{d}y \quad (1)$$

where E_i(x, y) and E_j(x, y) are the electric field strengths at (x, y) when the ith and jth electrode pairs are injected with currents I_i and I_j, respectively, for excitation in turn. For ERT, the measured voltage differences are normalized by calculating their relative changes with respect to the reference voltage differences, which are obtained when the ERT sensor is filled with a conductive background medium.
It can be expressed as [15,16]:

$$\lambda(i, j) = \frac{V_m(i, j) - V_r(i, j)}{V_r(i, j)} \quad (2)$$

where λ(i, j) is the normalized change in voltage difference for injection electrode pair i and measurement electrode pair j, and V_m(i, j) and V_r(i, j) are the measured and reference voltage differences for that pair combination, respectively.

The Landweber iteration [17] is derived from the steepest gradient descent method in optimization theory. To improve the convergence rate, the original Landweber iteration is modified to [14]:

$$g_{k+1} = P\left[g_k + \alpha_k S^{T}(\lambda - S g_k)\right] \quad (3)$$

where g_k and g_{k+1} are the normalized conductivity vectors at the kth and (k+1)th iterations, S is the sensitivity matrix, λ is the normalized measurement vector, and α_k is the gain or relaxation factor, which determines the convergence rate. P is a projection operator:

$$P[g] = \begin{cases} 0 & g < 0 \\ g & 0 \le g \le 1 \\ 1 & g > 1 \end{cases} \quad (4)$$

In the iteration process, the relaxation factor α_k can be updated at each iteration as discussed by Liu et al. [18]. To start the Landweber iteration, the initial conductivity distribution g_0 is normally reconstructed by LBP. Two criteria are employed to evaluate the performance of reconstruction algorithms: the relative image error and the correlation coefficient between the true image and the reconstructed image. Their definitions were given in [12].

Analysis of Fringe Field Distribution

Many researchers have considered the fringe effect between parallel electrode plates. Metodiey et al. analyzed the fringe fields between two finite parallel flat plates [19]. For two flat plates charged at +V_0 and -V_0, the electric field between them can be expressed by an implicit mapping in terms of the parameters u and v [19], from which the radial electric field E_y and longitudinal electric field E_x are obtained. Here d is the distance between the plates, the x direction is the longitudinal direction along the plates, and the y direction is perpendicular to the plates. An analytical solution to the static electric field between two parallel plates was derived and is shown in Figure 1a [19]. It is observed that the fringe electric fields between the parallel plates are distorted and decay in magnitude with distance from the edge of the plates. Governed by a similar law, this kind of fringe effect also exists between resistive electrodes. Figure 1b shows the current density distribution between a pair of adjacent electrodes in an ERT sensor, obtained by finite element simulation. The further away from the electrode edge, the more decayed and distorted the fringe electric fields.

Figure 1. Static electric field or electrical current density distribution between two capacitive or resistive electrodes. (a) Magnitude of the static electric field between parallel electrode plates [19]; (b) current density distribution between a pair of adjacent electrodes.

According to Equation (1), the sensitivity distribution is determined by the electric field distribution inside the sensing domain of an ERT sensor. Thus, the fringe effect at different axial positions can be observed in the sensitivity distributions in the corresponding planes above or below the measurement plane. Sensitivity maps of a conventional single-plane ERT sensor are generated to show the fringe effect at different axial positions for the adjacent and opposite electrode pairs. In this case, the planes 0.5, 3, and 5 cm above the top end of the electrodes are selected for comparison. Figure 2 shows the sensitivity distributions between the adjacent and opposite electrode pairs. Compared to the sensitivity distribution in the measurement plane, the sensitivity clearly decreases as the selected plane moves away from the measurement plane.
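As a concrete illustration, the following is a minimal Python/NumPy sketch of the projected Landweber iteration of Equations (2)-(4). Everything here is a hypothetical stand-in rather than the authors' code: the sensitivity matrix S and measurement vectors are placeholder arrays, a fixed convergent step size replaces the per-iteration linear search of Liu et al. [18], and the measurement residual is used as a proxy for the image error that the paper uses as a stopping criterion.

```python
import numpy as np

def normalize(v_measured, v_reference):
    """Equation (2): relative change of the voltage differences."""
    return (v_measured - v_reference) / v_reference

def projected_landweber(S, lam, alpha=None, n_iter=200):
    """Projected Landweber iteration, Equations (3) and (4).

    S   : (n_meas, n_pixels) sensitivity matrix
    lam : (n_meas,) normalized measurement vector
    Returns the normalized conductivity vector g, clipped to [0, 1].
    """
    if alpha is None:
        # Fixed step size satisfying 0 < alpha < 2 / sigma_max(S)^2,
        # a standard convergence condition for the Landweber iteration.
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2
    g = np.clip(S.T @ lam, 0.0, 1.0)                # LBP-style initial guess g_0
    prev_err = np.inf
    for _ in range(n_iter):
        g_next = g + alpha * (S.T @ (lam - S @ g))  # gradient step, Eq. (3)
        g_next = np.clip(g_next, 0.0, 1.0)          # projection operator P, Eq. (4)
        err = np.linalg.norm(lam - S @ g_next)      # residual as error proxy
        if err > prev_err:                          # stop when the error grows
            break
        g, prev_err = g_next, err
    return g
```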
However, the sensitivity distribution between each electrode pair does not vary appreciably when the selected plane is only 0.5 cm above the measurement plane. Although the fringe sensitivity is not as large as in the measurement plane, objects outside the measurement plane still affect the images if they are sufficiently large.
To quantify the similarity between the sensitivity distributions shown in Figure 2, one of the evaluation criteria for reconstructed images can be adopted: the correlation coefficient between the sensitivity distribution in the measurement electrode plane and that in another selected axial plane. It is defined as:

$$r = \frac{\sum_{p}\,(S_p - \bar{S})(\hat{S}_p - \bar{\hat{S}})}{\sqrt{\sum_{p}(S_p - \bar{S})^2 \, \sum_{p}(\hat{S}_p - \bar{\hat{S}})^2}}$$

where S is the sensitivity distribution in the measurement electrode plane between a specified electrode pair, Ŝ is the sensitivity distribution in the selected plane above or below the measurement plane, S̄ and the mean of Ŝ are the mean values of S and Ŝ, respectively, and the sum runs over all pixels p. Figure 3 shows a decreasing trend of the correlation coefficient, indicating that the similarity between sensitivity distributions diminishes as the selected plane moves away from the measurement plane. This is undesirable for fringe-effect compensation, because the auxiliary electrode plane should respond to the fringe effect similarly to the measurement plane (in both magnitude and pattern of the sensitivity distributions) to provide a useful reference for compensation. Therefore, the auxiliary electrode plane should be as close as possible to the measurement electrode plane.
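The correlation coefficient above reduces to a one-line computation once the two sensitivity maps are available. The sketch below assumes they are NumPy arrays of equal shape; the variable names are illustrative.

```python
import numpy as np

def sensitivity_correlation(S, S_hat):
    """Correlation coefficient between the sensitivity map S in the
    measurement electrode plane and the map S_hat in a selected plane
    above or below it."""
    s, s_hat = S.ravel(), S_hat.ravel()
    ds, ds_hat = s - s.mean(), s_hat - s_hat.mean()
    return (ds @ ds_hat) / np.sqrt((ds @ ds) * (ds_hat @ ds_hat))
```

Plotting this value against the axial offset of the selected plane would reproduce the kind of decreasing trend shown in Figure 3.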
Compensation Scheme

Based on the above analysis, it is possible to capture the fringe effect induced by objects outside the measurement plane. We propose the use of auxiliary electrode planes in an ERT sensor for compensation, i.e., a multi-plane sensor scheme. With the multi-plane sensor, objects outside the measurement plane are sensed by each individual plane simultaneously. If the electrode arrangements in the measurement plane and the auxiliary planes are the same, it can be assumed that there is a proportional relationship between the fringe effect sensed by the measurement plane and that sensed by an auxiliary plane when the planes are sufficiently close to each other, as discussed above:

$$\lambda_1 = k \lambda_2 \quad (8)$$

where λ_1 and λ_2 are the fringe effects sensed by the measurement plane and an auxiliary plane, respectively, and k is the proportionality factor, which is application-dependent. Based on this assumption, compensation is achieved by subtracting the weighted measurement data of the auxiliary plane from those of the measurement plane, as described in detail in Section 4.1. To compensate for the fringe effect, three-plane and two-plane ERT sensor schemes are investigated in the following.

Three-Plane ERT Sensor Scheme

A three-plane ERT sensor scheme was proposed by Sun and Yang to compensate for the fringe effect [8]. The three electrode planes of the sensor are denoted the top, middle, and bottom planes. The middle plane is used for image reconstruction, while the other two are auxiliary planes for compensating the fringe effect induced by objects outside the sensor plane. According to a 3D model for ERT [8,20], objects inside the ERT sensor are sensed by all three electrode planes if the same excitation signal and measurement strategy are applied to all three planes simultaneously. Similarly, objects above or below the middle plane are sensed by the middle plane and the top or bottom plane at the same time. As the three electrode planes are identical in geometry and structure, their responses to the same object inside their sensing ranges are correlated. The fringe effect induced by objects above the middle plane can therefore be compensated with the measurements in the top plane, and that induced by objects below the middle plane with the measurements in the bottom plane. The compensation is made by subtracting the weighted measurement data (before normalization) of the top and bottom planes from the measurement data (before normalization) of the middle plane:

$$V_{ac} = V_m - WF \cdot (V_t + V_b)$$

where V_ac is the measured vector of potential differences after compensation; V_m, V_t, and V_b are the measured vectors of potential differences in the middle, top, and bottom electrode planes, respectively; and WF is the weighting factor, a small positive scalar initially determined by trial and error. A three-plane ERT sensor was designed to verify the proposed method. The inner diameter of the sensor is 10 cm, and each plane has 16 electrodes. The gap between adjacent planes is 5 mm. Unlike driven guards in ERT sensors, measurements are also taken from the top and bottom planes in the same way as in the middle plane.
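The compensation itself is a simple vector operation on the raw potential-difference vectors. Below is a minimal sketch, assuming the measurement vectors are already assembled in the same measurement order; the function names and the two-plane variant shown alongside are illustrative, not from the paper.

```python
import numpy as np

def compensate_three_plane(v_mid, v_top, v_bot, wf):
    """Subtract the weighted fringe-effect reference of the top and bottom
    auxiliary planes from the middle-plane data
    (all vectors taken before normalization)."""
    return v_mid - wf * (v_top + v_bot)

def compensate_two_plane(v_meas, v_aux, wf):
    """Two-plane variant: a single auxiliary plane provides the reference."""
    return v_meas - wf * v_aux
```

The compensated vector V_ac is then normalized with Equation (2) against the middle-plane reference data before reconstruction, as described below.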
With the adjacent measurement strategy, a current signal is injected into a pair of adjacent electrodes in the middle plane, while the two pairs of adjacent electrodes directly above and below it in the top and bottom planes are injected with almost the same current signal, respectively. Note that electrodes in the same column are injected with currents of the same polarity. Potential differences are measured in each electrode plane separately according to the measurement strategy. This process is repeated until all independent measurements in each electrode plane have been taken. The number of independent measurements is N(N − 3)/2, where N is the number of electrodes. In the simulation, each pair of differential currents has a peak-to-peak magnitude of 10 mA, with the two currents exactly out of phase, and the frequency of the injected AC current is 10 kHz. The measurement data after compensation are normalized using the reference data acquired in the middle plane with the conductive background medium filling the sensor.

Compensation is only effective in reducing the fringe effect induced by objects outside the measurement plane of the ERT sensor. For an axially non-uniform distribution with a single object inside the sensor, the fringe effect can be reduced by direct scaling [8]. Additionally, it was shown that reconstruction algorithms based on a linear forward projection, e.g., the Landweber iteration, tend to over-estimate the size of a non-conductive object when the distribution is axially uniform; this can be overcome with a forward operator based on the finite element method (FEM), which is computationally intensive [17]. It was found that the multi-plane sensor scheme is also effective in reducing this kind of over-estimation by linear forward projection (e.g., the Landweber iteration) at lower computational cost when the distribution is axially uniform. Therefore, this paper examines axially uniform distributions with a non-conductive object and axially non-uniform distributions with multiple non-conductive objects inside or outside the measurement plane. The same phantoms with conductive materials are also used for comparison; note that the conductivity of the conductive materials is higher than that of the background medium.

Initial simulation was carried out to investigate the proposed three-plane sensor scheme. Three different setups were tested to evaluate the effectiveness of the three-plane ERT sensor in reducing the fringe effect and the over-estimation by the Landweber iteration. Cross-sectional views of the normalized true distributions for the three setups are shown in Figure 4. In Figure 4c, three rods are placed at different axial and cross-sectional positions, as shown in Figure 5a; only the rod marked in red in Figure 5b is inside the middle plane for imaging, with the other two outside the middle plane. The three rods have the same diameter and almost the same length. In all setups, the rods are non-conductive, with saline of conductivity 0.02 S/m as the background medium.
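For a 16-electrode plane, N(N − 3)/2 gives 104 independent measurements. The short sketch below (the electrode indexing is illustrative) enumerates the injection/measurement pair combinations of the adjacent strategy, skipping combinations that share an electrode and counting each reciprocal pair once:

```python
def independent_adjacent_measurements(n_electrodes=16):
    """Enumerate (injection pair, measurement pair) combinations for the
    adjacent strategy. Pairs sharing an electrode are excluded, and each
    unordered combination is counted once (reciprocity)."""
    pairs = [(i, (i + 1) % n_electrodes) for i in range(n_electrodes)]
    measurements = []
    for a, inj in enumerate(pairs):
        for meas in pairs[a + 1:]:
            if set(inj) & set(meas):   # skip pairs sharing an electrode
                continue
            measurements.append((inj, meas))
    return measurements

assert len(independent_adjacent_measurements(16)) == 16 * (16 - 3) // 2  # 104
```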
The measured data in the different planes differ but are correlated with each other. The fringe effect induced by objects outside the middle plane can be extracted from the measurements in the top and bottom planes. For the distributions in Figure 4a,b, the simulated potential differences acquired in the three sensor planes are similar to each other because the distribution is almost the same for all three planes.

Using the normalized data after compensation, images are reconstructed for the above setups by the projected Landweber iteration, as shown in Figure 6a-c. For comparison, the reconstruction results with a single-plane ERT sensor are shown in Figure 6d-f. Note that the single-plane ERT sensor consists of only the middle electrode plane of the three-plane ERT sensor, with the other geometry parameters unchanged. The weighting factors used during reconstruction are 0.166, 0.101, and 0.114 for the setups in Figure 6a-c, respectively. For quantitative comparison, the relative image errors and correlation coefficients of Figure 6a-f with respect to the respective true distributions are listed in Table 1, together with the settings of the relaxation factor and the number of iterations for optimized reconstruction using the projected Landweber iteration. Note that the relaxation factor is updated in each iteration according to the linear search method proposed by Liu et al. [14].
Figure 6 and Table 1 also show that the over-estimation by the Landweber iteration and the fringe effect induced by objects outside the measurement plane can be substantially reduced with the three-plane ERT sensor scheme, especially in Figure 6c compared to Figure 6f. Note that the images of the two objects outside the middle sensor plane produced by the single-plane ERT sensor in Figure 6f are not very prominent with only three iterations; the grey level would be higher with more iterations, i.e., more severe artefacts in the reconstructed image. The iteration stops when the image error in the current iteration becomes larger than that in the previous iteration.

To further investigate the effectiveness of the three-plane sensor scheme, more simulations were carried out for a variety of scenarios with varied object size, arrangement, and conductivity, as well as sensor geometry. The objective is to resolve the issue that objects outside the measurement (middle) electrode plane affect the reconstructed images because of the fringe effect. As shown in Figure 7a-c, three distributions were simulated: a single rod, two rods, and three rods. The influence of the length and diameter of the rods is evaluated for all selected distributions. The gap between adjacent electrode planes was also varied so that the optimal gap could be determined. In the first two distributions, the single rod or the two rods are outside the middle electrode plane. In Figure 7c, the axial distribution of the three rods is similar to that in Figure 5, but the rod inside the measurement plane is lengthened to match the length of the ERT sensor, and the two rods outside the measurement plane have the same length as each other. To illustrate how the fringe effect changes with conductivity contrast, two modifications were made in the simulation for comparison: (1) conductive objects (σ = 1 S/m) were employed; (2) the conductivity of the background medium was changed to 0.2 S/m. In the simulation, the rod length varies from 3 to 24.5 cm, while half of the length of the sensor wall is 25 cm.
For each specified length, two different rod diameters (0.75 cm and 1.5 cm) are simulated for comparison. In Figure 8, the distributions in all cases are reconstructed using the Landweber iteration. With the single-plane ERT sensor, the reconstructed images are consistent with the previous analysis: rods outside the sensor plane affect the reconstructed images due to the fringe effect, and increasing the rod length and diameter makes the rods more prominent in the reconstructed images, i.e., the fringe effect becomes more severe. With the proposed three-plane sensor scheme, the fringe effect is reduced significantly. Good results are obtained when the gap between adjacent electrode planes is 0.5 cm, as discussed in Section 2.
(a) single rod; (b) two rods; (c) three rods

As mentioned previously, objects at various distances from the end of the measurement electrode plane experience different fringe electric fields. To validate the effectiveness of the three-plane sensor scheme, the distance between the bottom of a rod and the top end of the middle electrode plane is increased from 0.5 cm to 3 cm. In Figure 8, the reconstructed images show that the fringe effect is also reduced with the three-plane sensor scheme in this case. For two rods, a similar phenomenon is observed in the reconstructed images in Figure 9: the fringe effect induced by the two rods outside the measurement electrode plane is significantly reduced. However, the rods are weakly visible when the conductive rods are 3 cm long with a 3 cm gap between the measurement plane and the compensation plane, because the compensation of the fringe effect weakens as the compensation plane moves away from the measurement plane, as validated in the previous section. In Figure 9, although the fringe effect can be reduced when the gap between the electrode planes is increased from 0.5 cm to 3 cm, artefacts can still be observed in some cases. Compared with the previous simulation of three objects, the length of the rods outside the middle plane is changed, and rods of 1.5 cm in diameter are used in this case. A similar setup for the gap between the edge of the measurement electrodes and the bottom of the rods was employed for the rods outside the measurement plane. As shown in Figure 10, the over-estimation of the rod inside the measurement plane and the fringe effect induced by the outside rods cannot be reduced when the gap between adjacent electrode planes is 3 cm. For conductive rods, artefacts are obvious in the reconstructed images, and the fringe effect induced by longer rods cannot be compensated with a 3 cm gap between adjacent planes. This confirms that increasing the gap between the measurement plane and the compensation plane decreases the efficiency of the proposed three-plane sensor scheme.

The reconstructed images represent the local optimum for the corresponding distributions with the single-plane or three-plane ERT sensors. This indicates that, for the specified object distributions, accuracy is improved with the three-plane ERT sensor compared with the single-plane ERT sensor. A smaller gap between adjacent electrode planes is more suitable for the proposed three-plane sensor scheme. The length, number, and diameter of the objects outside the measurement plane influence the fringe effect, but the three-plane sensor scheme reduces the fringe effect in each case. Note that the reconstructed images of non-conductive objects are good, while those of conductive objects suffer from artefacts; compared with non-conductive objects, conductive objects attract more electric field lines. Severe distortion of the fringe electric fields means that the axial distance between two adjacent electrode planes cannot be too large; otherwise, the fringe effects sensed by the two electrode planes would differ considerably, making the compensation ineffective. This provides guidance for the design of a multi-plane ERT sensor, i.e., a sufficiently small distance between adjacent electrode planes.
Two-Plane ERT Sensor Scheme

The proposed three-plane sensor scheme has some practical drawbacks: three electrode planes increase the complexity of the sensor design, and more measurement channels are needed in the data acquisition hardware. It is therefore worthwhile to investigate the performance of a two-plane sensor scheme in reducing the fringe effect. In the two-plane ERT sensor scheme, the electrode plane below the middle plane of the three-plane scheme is removed, leaving a single auxiliary plane for compensation of the fringe effect. According to the above simulation results, the fringe effect induced by outside objects may be compensated with the measurements in the auxiliary electrode plane. Compensation can be made by adapting the method proposed for the three-plane scheme, taking the following form:

$$V_{ac} = V_m - WF \cdot V_a$$

where V_a is the measured vector of potential differences in the auxiliary plane.

As shown in Figure 11, cross-sectional object distributions similar to those in the previous simulation were used. For each object distribution, rods with lengths of 3 cm and 24.5 cm are used for comparison. In Figure 11a, the rod is moved up by about 3 cm; the 3 cm long rod is used in this case and is denoted 'L*' in Figure 12. For the object distributions in Figure 11b,c, the rods outside the measurement plane are arranged in two different cases: rods on the same side or on opposite sides of the measurement plane. In Figure 11b, one case places two identical rods 0.5 cm above the top end of the measurement electrodes; the other case moves the left rod to the opposite side, 0.5 cm below the bottom end of the measurement electrodes. In Figure 11c, the lengths of the three rods are changed: 'rod a', inside the measurement plane, is as long as the sensor, while the other two rods are set up similarly to the two-rod distribution in Figure 11b regarding their axial positions.
(a) single rod; (b) two rods; (c) three rods

As shown in Figures 12-14, the reconstructed images of the distributions in Figure 7a-c are obtained using the two-plane sensor scheme. The quality of the reconstructed images after compensation with the two-plane sensor scheme decreases noticeably compared with the three-plane scheme. Specifically, artefacts can be observed in the reconstructed images of the single rod, especially in the area of the placed rod. In the reconstructed images of two rods, the rod below the measurement plane affects the image.
This phenomenon can also be observed in the reconstructed images of three rods. Therefore, for the proposed two-plane sensor scheme, the compensation plane cannot properly compensate the fringe effect induced by an object placed below the measurement plane if the compensation plane is above the measurement plane. The fringe effect induced by conductive objects causes severe artefacts in the reconstructed images, and the effect is more obvious than with the three-plane sensor scheme. Otherwise, the two-plane sensor scheme copes with variations in the fringe effect due to changes in the length, diameter, and number of objects as well. The simulation results show that the proposed multi-plane sensor scheme can compensate for the fringe effect of an ERT sensor, providing more accurate images of the real object distributions inside the measurement plane.

Figure 14. Reconstructed images of three objects with the two-plane ERT sensor.

Experiment and Results

An experimental system with a three-plane ERT sensor and a two-plane ERT sensor was established to verify the simulation results. In this system, the three-plane ERT sensor has three identical electrode planes with a 3 cm gap between adjacent planes, while the gap between the electrode planes of the two-plane ERT sensor is 2 cm. For the multi-plane ERT sensors, current sources inject currents into multiple pairs of electrodes in the multiple electrode planes at the same time. Each current has a peak-to-peak magnitude of around 1 mA, and the paired currents are nearly out of phase. The signal frequency of the injected AC current is 10 kHz for both the three-plane and the two-plane sensor scheme.
By multiplexing, each current is injected into a pair of adjacent electrodes in the corresponding electrode plane. The potential difference between each possible pair of adjacent electrodes in each electrode plane is conditioned with a differential amplifier (amplified by 100 times through two stages of 10 times each) and then measured by a data acquisition unit. Each measurement is sent to a PC via USB, and an image is finally reconstructed in MATLAB from the received data.

Three-Plane ERT Sensor

For the three-plane sensor scheme, three object distributions similar to those in the initial simulation were set up in the experiment: a rod in the center, a rod near the pipe wall, and three rods at different axial and cross-sectional positions. Cross-sectional views of the normalized true distributions in the three setups are shown in Figure 15. All rods are non-conductive, with saline of conductivity 0.023 S/m as the background medium. The cylindrical rods for imaging have a diameter of 3 cm and a length of 20 cm. Note that in Figure 15c, only the rod inside the middle plane is shown with solid filling, with a length of 20 cm. The other two rods (dotted circles) have the same diameter of 3 cm and a length of 8 cm; they are placed above and below the middle plane, respectively, at different cross-sectional positions as in Figure 5, and are about 2 mm away from the top or bottom ends of the electrodes in the middle plane, respectively.

With the proposed three-plane ERT sensor scheme, the reconstruction results for these three setups using the projected Landweber iteration are shown in Figure 16a-c. For comparison, the reconstruction results with a single-plane ERT sensor are shown in Figure 16d-f. Note that the single-plane ERT sensor consists of only the middle electrode plane of the three-plane ERT sensor. The weighting factors are chosen to be 0.05, 0.09, and 0.03 for the setups in Figure 15a-c, respectively, during reconstruction. The relaxation factor is updated in each iteration according to the linear search method proposed by Liu et al. [15]. Figure 16 shows that the three-plane ERT sensor scheme can reduce the over-estimation by the Landweber iteration for all distributions and the fringe effect induced by objects outside the sensor plane, improving the quality and accuracy of the reconstructed images significantly. This is consistent with the conclusions drawn from the simulation.
Two-Plane ERT Sensor

For the two-plane sensor scheme, four object distributions were established in the experiment: a single rod in the center, two arrangements of two rods, and three rods at different cross-sectional and axial locations. 3D views of the true object distributions are shown in Figure 17. In the two-plane ERT sensor, the bottom electrode plane is chosen as the measurement plane, while the top plane is used to compensate for the fringe effect. Cylindrical non-conductive rods were used to set up the object distributions, with tap water as the background medium. In Figure 17, the nylon rods and the sand-filled rods have diameters of 6 cm and 8 cm, respectively, and lengths of 20 cm and 9 cm, respectively. In Figure 17a, the nylon rod is about 2 mm above the top end of the electrodes in the bottom plane. In Figure 17b, two nylon rods of 20 cm length are placed near the pipe wall at the same axial position as the single rod; in these two cases, the rods are outside the measurement plane. In Figure 17c, the nylon rods are replaced by sand-filled rods because of the length restriction of the sensor: one sand-filled rod of 9 cm length is placed about 2 mm below the bottom end of the electrodes in the measurement plane, while another rod of the same length is about 2 mm above the top end of the measurement electrodes. The cross-sectional positions of the two rods are the same as in Figure 17b. In Figure 17d, a nylon rod is placed inside the measurement plane, and two sand-filled rods are placed outside it at the same axial positions as in Figure 17c.

Figure 18 shows the reconstructed images of the specified object distributions with the proposed two-plane sensor scheme. The reconstructed images with the conventional single-plane ERT sensor are also displayed for comparison.
According to Figure 18, the two-plane ERT sensor scheme can reduce the fringe effect induced by objects outside the measurement plane as well as the over-estimation by the Landweber iteration. The weighting factors are determined to be 0.31, 0.26, 0.21, and 0.18 for the imaging scenarios in Figure 18a-d, respectively. However, the compensation does not work when the objects are below the measurement plane if the auxiliary electrode plane used for compensation is above the measurement plane. The experimental results are consistent with the simulation results. On the other hand, the image quality with the two-plane sensor scheme is not as good as that with the three-plane scheme.

Unlike the three-plane scheme, the two-plane scheme allows the roles of the measurement plane and the compensation plane to be swapped: either electrode plane of the two-plane ERT sensor can serve as the measurement plane or the compensation plane, depending on the practical situation. For the experimental setups in Figure 17c,d, the images are reconstructed after exchanging the measurement plane and the compensation plane, as shown in Figure 19. The weighting factors are determined to be 0.275 and 0.237, respectively. The reconstruction results are consistent with the previous findings. In both cases, the top rod is reconstructed.
The over-estimation of its size is reduced as its position changes from outside the measurement plane to inside it. In contrast, the bottom rod does not affect the image, because the fringe effect it induces is compensated by the compensation plane. This demonstrates the flexibility of the two-plane sensor scheme.

Figure 19. Reconstruction results for two experimental setups (Figure 17c,d) with the two-plane ERT sensor, obtained by exchanging the measurement plane and the compensation plane: (a) two rods; (b) three rods.

Conclusions

This paper presents multi-plane ERT sensor schemes, three-plane and two-plane, to compensate for the fringe effect induced by objects outside the measurement sensor plane. Both simulation and experimental results validate the proposed method, considering the influence of the length, diameter, and axial position of objects on the fringe effect for several object distributions. With the multi-plane ERT sensor schemes, objects outside the measurement plane are almost invisible in the reconstructed images.
Meanwhile, it is found that the Landweber iteration over-estimates the size of a non-conductive object in the case of axially uniform distributions, due to the large conductivity contrast between the object and the background; this over-estimation can be alleviated by the multi-plane sensor schemes. However, this may not be the case when the contrast becomes smaller, e.g., when imaging two-phase flows with both phases conductive. In that case, other reconstruction algorithms may be applied to compensate for the reduction in the size of the imaged objects caused by the compensation for the fringe effect. Finally, the gap between adjacent electrode planes should be sufficiently small so that the multi-plane sensor scheme can reduce the fringe effect effectively.

It is important to determine the weighting factors for compensation, because all the weighting factors used above were determined empirically. In the future, the weighting factor should be selected adaptively for practical applications, which may be accomplished by a linear search method. Further simulation and experiments are needed to investigate the effectiveness of the proposed method in more complicated scenarios. For multiphase flow measurement, the proposed compensation method can be applied to reduce the fringe effect induced by the axially non-homogeneous distribution of the dispersed phase, e.g., in slug flow and plug flow, which are common in industrial processes. Due to the fringe effect, erroneous images of bubble columns will be obtained if large bubbles lie outside the measurement electrode plane. With the proposed method, the quality of 2D ERT imaging can be improved to meet the measurement requirements in practical industrial applications.
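A possible realization of the adaptive selection suggested above is a bounded one-dimensional search over the weighting factor. The sketch below assumes the `landweber` and `compensate_fringe` helpers from the earlier snippet and a user-supplied `artifact_metric` scoring residual out-of-plane artifacts in the reconstructed image; both the metric and the search bounds are hypothetical, since the paper leaves their choice open.

```python
from scipy.optimize import minimize_scalar

def select_weighting_factor(v_meas, v_aux, v_ref_meas, v_ref_aux,
                            S, artifact_metric, w_max=1.0):
    """Linear (one-dimensional) search for the weighting factor w."""
    def objective(w):
        dv = compensate_fringe(v_meas, v_aux, v_ref_meas, v_ref_aux, w)
        return artifact_metric(landweber(S, dv))

    res = minimize_scalar(objective, bounds=(0.0, w_max), method="bounded")
    return res.x
```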
On different Versions of the Exact Subgraph Hierarchy for the Stable Set Problem

Let $G$ be a graph with $n$ vertices and $m$ edges. One of several hierarchies towards the stability number of $G$ is the exact subgraph hierarchy (ESH). On the first level it computes the Lov\'{a}sz theta function $\vartheta(G)$ as a semidefinite program (SDP) with a matrix variable of order $n+1$ and $n+m+1$ constraints. On the $k$-th level it adds all exact subgraph constraints (ESC) for subgraphs of order $k$ to the SDP. An ESC ensures that the submatrix of the matrix variable corresponding to the subgraph is in the correct polytope. By including only some ESCs into the SDP the ESH can be exploited computationally. In this paper we introduce a variant of the ESH that computes $\vartheta(G)$ through an SDP with a matrix variable of order $n$ and $m+1$ constraints. We show that it makes sense to include the ESCs into this SDP and introduce the compressed ESH (CESH) analogously to the ESH. Computationally the CESH seems favorable as the SDP is smaller. However, we prove that the bounds based on the ESH are always at least as good as those of the CESH. In computational experiments sometimes they are significantly better. We also introduce scaled ESCs (SESCs), which are a more natural way to include exactness constraints into the smaller SDP, and we prove that including an SESC is equivalent to including an ESC for every subgraph.

Introduction

One of the most fundamental problems in combinatorial optimization is the stable set problem. Given a graph G = (V, E), a subset of vertices S ⊆ V is called a stable set if no two vertices of S are adjacent. A stable set is called a maximum stable set if there is no stable set with larger cardinality. The cardinality of a maximum stable set is called the stability number of G and denoted by α(G). The stable set problem asks for a stable set of size α(G). It is an NP-hard and well-studied problem, see for example the survey of Bomze, Budinich, Pardalos and Pelillo [3].

In this paper we show that it makes sense to consider this new hierarchy, which we newly introduce as the compressed (because the SDP is smaller) ESH (CESH). We prove that both the ESH and the CESH are equal to ϑ(G) on the first level and equal to α(G) on the n-th level. Furthermore, the SDP has a smaller matrix variable and fewer constraints, so intuitively the CESH is computationally favorable. However, we prove that the bounds obtained by including an ESC into (T_{n+1}) are always at least as good as those obtained from including the same ESC into (T_n), demonstrating that the bounds obtained from the ESH are at least as good as those from the CESH. Furthermore, it turns out in our computational comparison that the bounds are sometimes significantly worse for the CESH, but the running times do not significantly decrease. Hence, we confirm that the ESH has the better trade-off between the quality of the bound and the running time.

The intuition behind the SDP (T_n) is different from the one of (T_{n+1}), in particular for the solutions representing stable sets. We show in this paper that there is an alternative intuitive definition of exact subgraphs for (T_n). This leads to our new definition of scaled ESCs (SESCs) and our introduction of another new hierarchy, the scaled ESH (SESH). We prove that SESCs coincide with the original ESCs for (T_n), which implies that the ESH and the SESH coincide.
To summarize, in this paper we confirm that even though our new hierarchies based on exactness seem more intuitive and computationally favorable, with off-the-shelf SDP solvers it is the best option to consider the ESH in the way it has been done so far. Our findings are in accordance with the results of [16], where it is observed that (T_{n+1}) typically gives stronger bounds when strengthened.

The rest of the paper is organized as follows. In Section 2 we give rigorous definitions of ESCs and the ESH and explain how they can be exploited computationally. In Section 3 we introduce the CESH and compare it to the ESH, also in the light of the results of [16]. Then we introduce SESCs in Section 4 and investigate how they are related to the ESCs. In Section 5 we present computational results and we conclude our paper in Section 6.

We use the following notation. We denote by N_0 the natural numbers starting with 0. By 1_d and 0_d we denote the vector or matrix of all ones and all zeros of size d, respectively. Furthermore, by S^n we denote the set of symmetric matrices in R^{n×n}. We denote the convex hull of a set S by conv(S) and the trace of a matrix X by trace(X). Moreover, diag(X) extracts the main diagonal of the matrix X into a vector. By x^T and X^T we denote the transpose of the vector x and the matrix X, respectively. Moreover, we denote the i-th entry of the vector x by x_i and the entry of X in the i-th row and the j-th column by X_{i,j}. Furthermore, we denote the inner product of two vectors x and y by ⟨x, y⟩ = x^T y. The inner product of two matrices X = (X_{i,j}) and Y = (Y_{i,j}) of order n is defined as $\langle X, Y\rangle = \sum_{i=1}^{n}\sum_{j=1}^{n} X_{i,j} Y_{i,j}$. Furthermore, the t-dimensional simplex is given as
$$\Delta_t = \left\{\lambda \in \mathbb{R}^t : \sum_{i=1}^{t} \lambda_i = 1,\ \lambda_i \ge 0\ \forall\, 1 \le i \le t\right\}.$$

The Exact Subgraph Hierarchy

In this section we recall exact subgraph constraints and the exact subgraph hierarchy for combinatorial optimization problems that have an SDP relaxation, introduced by Adams, Anjos, Rendl and Wiegele in 2015 [1]. We detail everything for the stable set problem, because in [1] they focused on Max-Cut. Besides motivation and definitions, we provide new examples, discuss the representation of exact subgraph constraints and compare the exact subgraph hierarchy to other hierarchies from the literature.

Lovász Theta Function

We start by presenting the Lovász theta function. To do so, it is handy to consider the incidence vectors of stable sets and the polytope they span. The set of all stable set vectors S(G) and the stable set polytope STAB(G) are defined as
$$S(G) = \{s \in \{0,1\}^n : s_i s_j = 0\ \forall \{i,j\} \in E\} \quad\text{and}\quad \mathrm{STAB}(G) = \mathrm{conv}\{s : s \in S(G)\}.$$
It is easy to see that the stability number α(G) is obtained by solving
$$\alpha(G) = \max\{\langle 1_n, x\rangle : x \in \mathrm{STAB}(G)\},$$
but unfortunately STAB(G) is very hard to describe in general. Several linear relaxations of STAB(G) have been considered, like the so-called fractional stable set polytope and the clique constraint stable set polytope. We refer to [18] for further details. We focus on another relaxation, namely the Lovász theta function ϑ(G), which is an upper bound on α(G). Grötschel, Lovász and Schrijver [18] proved
$$\vartheta(G) = \max\{\langle 1_n, x\rangle : X_{i,j} = 0\ \forall \{i,j\} \in E,\ \mathrm{diag}(X) = x,\ X - xx^T \succeq 0\} \qquad (T_{n+1})$$
and hence provided an SDP formulation of ϑ(G). This SDP has a matrix variable of order n + 1. Furthermore, there are m constraints of the form X_{i,j} = 0, n constraints to make sure that diag(X) = x, and one constraint ensuring that in the matrix of order n + 1 the entry in the first row and first column is equal to 1. Hence, there are n + m + 1 linear equality constraints in (T_{n+1}).
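For concreteness, a small sketch of the (T_{n+1}) formulation in CVXPY is given below; the function name is ours, and the matrix variable Y plays the role of the lifted matrix with first row (1, x^T).

```python
import cvxpy as cp

def lovasz_theta_Tn1(n, edges):
    """theta(G) via (T_{n+1}): matrix of order n+1, with Y[0,0] = 1,
    diag(X) = x (n constraints) and X_ij = 0 for every edge (m constraints)."""
    Y = cp.Variable((n + 1, n + 1), PSD=True)
    cons = [Y[0, 0] == 1]
    cons += [Y[i + 1, i + 1] == Y[0, i + 1] for i in range(n)]
    cons += [Y[i + 1, j + 1] == 0 for (i, j) in edges]
    return cp.Problem(cp.Maximize(cp.sum(Y[0, 1:])), cons).solve()

# The 5-cycle: alpha(C5) = 2 while theta(C5) = sqrt(5) ~ 2.236
print(lovasz_theta_Tn1(5, [(i, (i + 1) % 5) for i in range(5)]))
```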
To formulate (T_{n+1}) in a more compact way we observe the well-known fact that $X - xx^T \succeq 0$ if and only if
$$\begin{pmatrix} 1 & x^T \\ x & X \end{pmatrix} \succeq 0,$$
see Boyd and Vandenberghe [4, Appendix A.5.5] on Schur complements. Thus, the feasible region of (T_{n+1}) is
$$\mathrm{TH}^2(G) = \{(x, X) : X_{i,j} = 0\ \forall \{i,j\} \in E,\ \mathrm{diag}(X) = x,\ X - xx^T \succeq 0\}.$$
Clearly, for each element (x, X) of TH²(G) the projection of X onto its main diagonal is x. The set of all these projections is called the theta body TH(G). More information on TH(G) can be found for example in Conforti, Cornuejols and Zambelli [8]. It is easy to see that STAB(G) ⊆ TH(G) holds for every graph G, see [18]. Thus, ϑ(G) is a relaxation of α(G).

Introduction of the Exact Subgraph Hierarchy

In order to present the exact subgraph hierarchy we need a modification of the stable set polytope STAB(G), namely the squared stable set polytope.

Definition 2. Let G = (V, E) be a graph. The squared stable set polytope STAB²(G) of G is defined as
$$\mathrm{STAB}^2(G) = \mathrm{conv}\{ss^T : s \in S(G)\}.$$
The matrices of the form ss^T for s ∈ S(G) are called stable set matrices.

Note that the elements of STAB(G) are vectors in R^n, whereas the elements of STAB²(G) are matrices in R^{n×n}. In comparison to STAB(G) the structure of STAB²(G) is more sophisticated and less studied. Only if G has no edges does a projection of STAB²(G) coincide with a well-studied object, the boolean quadric polytope, see Padberg [27]. In particular, by putting the upper triangle with the main diagonal into a vector for all elements of STAB²(G) we obtain the elements of the boolean quadric polytope.

Let us now turn back to ϑ(G). The following lemma turns out to be the key ingredient for defining the exact subgraph hierarchy.

Lemma 1. If we add the constraint X ∈ STAB²(G) into (T_{n+1}) for a graph G, then the optimal objective function value is α(G), so
$$\alpha(G) = \max\{\langle 1_n, x\rangle : (x, X) \in \mathrm{TH}^2(G),\ X \in \mathrm{STAB}^2(G)\}. \qquad (1)$$

Proof. Let (P_E) be the SDP on the right-hand side of (1), let z_E be its optimal objective function value and let S(G) = {s_1, ..., s_t}. Let without loss of generality s_t be the incidence vector of a maximum stable set of G. Then clearly x = s_t and X = s_t s_t^T is feasible for (P_E) and has objective function value α(G), so α(G) ≤ z_E holds. Furthermore, any feasible solution (x, X) of (P_E) can be written as
$$X = \sum_{i=1}^{t} \lambda_i s_i s_i^T$$
for some λ ∈ Δ_t because X ∈ STAB²(G) holds. Thus, x can be written as
$$x = \mathrm{diag}(X) = \sum_{i=1}^{t} \lambda_i s_i.$$
In consequence, the objective function value of (x, X) for (P_E) is equal to
$$\langle 1_n, x\rangle = \sum_{i=1}^{t} \lambda_i \langle 1_n, s_i\rangle \le \alpha(G),$$
and hence z_E ≤ α(G) holds, which finishes the proof.

Lemma 1 implies that if we add the constraint X ∈ STAB²(G) to (T_{n+1}), then we get the best possible bound on α(G), namely α(G) itself. Unfortunately, depending on the representation of the constraint, we either include an exponential number of new variables (if we use a formulation as a convex hull) or inequality constraints (if we include inequalities representing facets of STAB²(G), see Section 2.3) into the SDP. In order to only partially include X ∈ STAB²(G) we exploit a property of stable sets, namely that a stable set of G also induces a stable set in each subgraph of G. To formalize this in an observation, we first need the following definition.

Definition 3. Let I ⊆ V be a subset of the vertices of the graph G = (V, E) with |V| = n and let k_I = |I|. We denote by G_I the subgraph of G that is induced by I. Furthermore, we denote by X_I = (X_{i,j})_{i,j∈I} the submatrix of X ∈ R^{n×n} which is indexed by I.
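The following brute-force sketch illustrates Lemma 1 on a tiny graph: writing X as a convex combination of stable set matrices and maximizing 1_n^T x = trace(X) recovers α(G). The helper that enumerates stable set vectors is reused in later snippets; all names are ours.

```python
import itertools
import numpy as np
import cvxpy as cp

def stable_set_vectors(n, edges):
    """All incidence vectors s in {0,1}^n with s_i s_j = 0 on edges."""
    return [np.array(b) for b in itertools.product([0, 1], repeat=n)
            if all(b[i] * b[j] == 0 for (i, j) in edges)]

def alpha_via_lemma1(n, edges):
    """(T_{n+1}) with X in STAB^2(G) imposed as a convex combination;
    by Lemma 1 the optimum equals alpha(G). Tiny graphs only."""
    mats = [np.outer(s, s) for s in stable_set_vectors(n, edges)]
    lam = cp.Variable(len(mats), nonneg=True)
    X = sum(lam[i] * mats[i] for i in range(len(mats)))
    # Objective <1_n, x> with x = diag(X); the Schur-complement PSD
    # condition is automatically satisfiable for such X.
    return cp.Problem(cp.Maximize(cp.trace(X)), [cp.sum(lam) == 1]).solve()

print(alpha_via_lemma1(5, [(i, (i + 1) % 5) for i in range(5)]))  # 2.0
```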
Observation 1. Let G = (V, E) be a graph and X ∈ S^n. Then X ∈ STAB²(G) holds if and only if X_I ∈ STAB²(G_I) holds for all I ⊆ V.

Proof. As X_I ∈ STAB²(G_I) for all I ⊆ V implies X ∈ STAB²(G) for I = V, one direction of the equivalence is trivial. For the other direction note that X ∈ STAB²(G) implies that X is a convex combination of ss^T for stable set vectors s ∈ S(G). From this one can easily extract a convex combination of ss^T for s ∈ S(G_I) for X_I, thus X_I ∈ STAB²(G_I) for all I ⊆ V.

Observation 1 implies that adding the constraint X ∈ STAB²(G) to (T_{n+1}) as in Lemma 1 makes sure that the constraint X_I ∈ STAB²(G_I) is fulfilled for all subgraphs G_I of G. This gives rise to the following definition.

Definition 4. Let G = (V, E) be a graph and let I ⊆ V. Then the exact subgraph constraint (ESC) for G_I is defined as X_I ∈ STAB²(G_I).

Definition 5. Let G = (V, E) be a graph with |V| = n and let J be a set of subsets of V. Then z^E_J(G) is the optimal objective function value of (T_{n+1}) with the ESC for every subgraph induced by a set in J, so
$$z^E_J(G) = \max\{\langle 1_n, x\rangle : (x, X) \in \mathrm{TH}^2(G),\ X_I \in \mathrm{STAB}^2(G_I)\ \forall I \in J\}. \qquad (2)$$
Furthermore, for k ∈ N_0 with k ≤ n let J_k = {I ⊆ V : |I| = k}. Then the k-th level of the exact subgraph hierarchy (ESH) is defined as z^E_k(G) = z^E_{J_k}(G).

In other words, the k-th level of the ESH is the SDP for calculating the Lovász theta function (T_{n+1}) with additional ESCs for every subgraph of order k. Due to Lemma 1 every level of the ESH is a relaxation of (1). Note that Adams, Anjos, Rendl and Wiegele did not give the hierarchy a name. However, they called the ESCs for all subgraphs of order k, and therefore the constraint to add at the k-th level of the ESH, the k-projection constraint.

Let us briefly look at some properties of z^E_k(G). For example, the next lemma shows that the bound obtained from the ESH is better the higher the level of the ESH is.

Lemma 2. Let G = (V, E) be a graph with |V| = n. Then ϑ(G) = z^E_0(G) = z^E_1(G), z^E_{k+1}(G) ≤ z^E_k(G) for all 0 ≤ k ≤ n − 1, and z^E_n(G) = α(G).

Proof. For k = 0 we do not add any additional constraint into (T_{n+1}). For k = 1 the ESC for I = {i} boils down to X_{i,i} ∈ [0, 1], which is already enforced by the positive semidefiniteness constraint. Therefore, ϑ(G) = z^E_0(G) = z^E_1(G) holds. Additionally, due to Lemma 1, whenever all subgraphs of order k are exact, also all subgraphs of order k − 1 are exact, which yields the desired result.

Next, we consider an example in order to get a feeling for the ESH and how good the bounds on α(G) obtained with it are.

Example 1. We consider z^E_k(G) for k ≤ 8 for a Paley graph, a Hamming graph [10] and a random graph G_{60,0.25} from the Erdős-Rényi model in Table 1. It is possible to compute z^E_2(G). For k ≥ 3 we use relaxations (i.e., we compute z^E_J(G) by including the ESCs only for a subset J of the set of all subgraphs of order k and determine the sets J as described in more detail in Section 5) to get an upper bound on z^E_k(G) or deduce the value. For hamming6-4 already for k = 2 the upper bound is an excellent bound on α(G) for this graph. For G_{60,0.25}, as k increases z^E_k(G) improves little by little. For k = 4 the floor value of z^E_k(G) decreases, which is very important in a branch-and-bound framework, where this potentially reduces the size of the branch-and-bound tree drastically. For the Paley graph on 61 vertices only for k ≥ 6 does the value of z^E_k(G) improve towards α(G). This example represents one of the worst cases, where including ESCs for subgraphs of small order does not give an improvement of the upper bound.

Example 1 shows that there are graphs where including ESCs for subgraphs of small order improves the bound very much, little by little, or not at all. It is not surprising that the ESH does not give outstanding bounds for all instances, as the stable set problem is NP-hard.
Representation of Exact Subgraph Constraints

Next, we briefly discuss the implementation of ESCs. In Definition 2 we introduced STAB²(G) as a convex hull, so the most natural way to formulate the ESC is as a convex combination as in the proof of Lemma 1. We start with the following definition.

Definition 6. Let G be a graph and let G_I be the subgraph induced by I ⊆ V. Furthermore, let |S(G_I)| = t_I and let S(G_I) = {s^I_1, ..., s^I_{t_I}}. Then the i-th stable set matrix S^I_i of G_I is defined as S^I_i = s^I_i (s^I_i)^T.

Now the ESC X_I ∈ STAB²(G_I) can be rewritten as
$$X_I = \sum_{i=1}^{t_I} \lambda^I_i S^I_i \quad\text{for some } \lambda^I \in \Delta_{t_I},$$
and it is natural to implement the ESC for the subgraph G_I in exactly this form. This implies that for the implementation of the ESC for G_I we include t_I additional non-negative variables, one additional equality constraint for λ^I and a matrix equality constraint of size k_I × k_I that couples X_I and λ^I into (T_{n+1}).

There is also a different possibility to represent ESCs that uses the following fact. The polytope STAB²(G_I) is given by its extreme points, which are the stable set matrices of G_I. Due to the Minkowski-Weyl theorem it can also be represented by its facets, i.e., by (finitely many) inequalities. A priori, different subgraphs induce different stable set matrices and hence also different squared stable set polytopes. The next result allows us to consider the squared stable set polytope of only one graph for a given order.

Lemma 3. Let G = (V, E) be a graph with |V| = n, let G^0_n denote the graph on n vertices without edges, and let X ∈ S^n satisfy X_{i,j} = 0 for all {i,j} ∈ E. Then X ∈ STAB²(G) holds if and only if X ∈ STAB²(G^0_n) holds.

Proof. If X ∈ STAB²(G), then by definition X is a convex combination of stable set matrices of G, and hence also a convex combination of stable set matrices of G^0_n, which comprise all possible stable set matrices of order n. Conversely, assume X ∈ STAB²(G^0_n), so X is a convex combination of all possible stable set matrices of order n. Consider an edge {i, j} ∈ E; then by assumption X_{i,j} = 0. Since all entries of stable set matrices are 0 or 1, this implies that whenever the entry (i, j) of a stable set matrix in the convex combination is not equal to zero, its coefficient is zero. Therefore, in the convex combination only stable set matrices which are also stable set matrices of G have non-zero coefficients, and thus X ∈ STAB²(G).

As a consequence of Lemma 3 we can replace the ESC X_I ∈ STAB²(G_I) by the constraint X_I ∈ STAB²(G^0_{k_I}) whenever we add the ESC to (T_{n+1}). Thus, it is enough to have a facet representation of STAB²(G^0_{k_I}) in order to include the ESC for G_I, represented by inequalities, into (T_{n+1}).
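In code, the convex-combination representation of Definition 6 can be attached to the lifted matrix variable of (T_{n+1}) roughly as follows, reusing the `stable_set_vectors` helper from the earlier sketch; the element-wise coupling of X_I and λ^I is spelled out for clarity, and the function name is ours.

```python
def esc_constraints(Y, I, edges):
    """ESC X_I in STAB^2(G_I) for the lifted variable Y of (T_{n+1}):
    t_I nonnegative variables lam, one equality sum(lam) == 1, and a
    k_I x k_I matrix equality coupling X_I with sum_i lam_i * S^I_i."""
    k = len(I)
    pos = {v: a for a, v in enumerate(I)}
    sub_edges = [(pos[u], pos[v]) for (u, v) in edges
                 if u in pos and v in pos]
    mats = [np.outer(s, s) for s in stable_set_vectors(k, sub_edges)]
    lam = cp.Variable(len(mats), nonneg=True)
    combo = sum(lam[i] * mats[i] for i in range(len(mats)))
    cons = [cp.sum(lam) == 1]
    for a in range(k):
        for b in range(k):
            cons.append(Y[I[a] + 1, I[b] + 1] == combo[a, b])
    return cons
```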
In order to obtain all facets of STAB²(G^0_k) for a given k we can use the fact that a projection of STAB²(G^0_k) is the boolean quadric polytope of size k, as already explained in Section 2.2. Deza and Laurent [9] called the boolean quadric polytope of size k the correlation polytope of size k. They showed that the correlation polytope of size k is in one-to-one correspondence with the cut polytope of size k+1 via the so-called covariance map. Moreover, they presented a complete list of the facets of the cut polytopes up to a size of k+1 = 7, gave several references to other lists of facets and furthermore linked to a web page. The recent version of this web page is maintained by Christof [6], and a conjectured complete facet description of the cut polytope of size k+1 = 8 and a possibly complete description of the cut polytope of size k+1 = 9 can be found there. Therefore, we could take this list and go back via the covariance map to transfer it into a complete list of facets of STAB²(G^0_k). However, we take a more direct path and use the software PORTA [7] in order to obtain all inequalities that represent facets of STAB²(G^0_k) from its extreme points for a given k. The number of facets for all k ≤ 6 is presented in Table 2.

For a subgraph G_I of order k_I = 3 with I = {i, j, ℓ} the ESC is equivalent to (3) for all three sets {i, j}, {i, ℓ} and {j, ℓ} together with the four inequalities (4), so 3 · 4 + 4 = 16 inequalities in total, which matches Table 2. We come back to these inequalities in Section 2.4 and Section 3.4.

To summarize, we have discussed two different options to represent ESCs, one as a convex combination and one as inequalities that represent facets.

Comparison to Other Hierarchies

In this section we compare the ESH for the stable set problem to other hierarchies, as this has never been done before. The most prominent hierarchies of relaxations for general 0-1 programming problems are the hierarchies by Sherali and Adams [29], by Lovász and Schrijver [25] and by Lasserre [22]. We refer to Laurent [23] for rigorous definitions, comparisons and for details of applying them to the stable set problem. In fact, the Lasserre hierarchy is a refinement of the Sherali-Adams hierarchy, which is a refinement of the SDP-based Lovász-Schrijver hierarchy. All three hierarchies are exact at level α(G), so after at most α(G) steps STAB(G) is obtained.

Silvestri [30] observed that z^E_2(G) is at least as good as the upper bound obtained at the first level of the SDP hierarchy of Lovász-Schrijver. This is easy to see, because this SDP is (T_{n+1}) with non-negativity constraints for X, and every X_I ∈ STAB²(G_I) is entry-wise non-negative due to (3a). Furthermore, Silvestri proved that the bound on the k-th level of the Lasserre hierarchy is at least as good as z^E_k(G), so the Lasserre hierarchy yields stronger relaxations than the ESH.

A drawback of all the above hierarchies is that the size of the SDPs to solve grows at each level. In particular, the SDP at the k-th level of the Lasserre hierarchy has a matrix variable with one row for each subset of i vertices of the n vertices for every 1 ≤ i ≤ k. Therefore, the matrix variable is of order $\sum_{i=0}^{k} \binom{n}{i}$. For the ESH this order remains n + 1 on each level and only the number of constraints increases.
Another big advantage of the ESH over the Lasserre hierarchy is that it is possible to include partial information of the k-th level of the hierarchy, which was exploited by Gaar and Rendl [13,14,15]. In the case of the Lasserre hierarchy one needs the whole huge matrix in order to incorporate the information. Due to that, Gvozdenović, Laurent and Vallentin [20] introduced a new hierarchy where they only consider suitable principal submatrices of the huge matrix.

Eventually we want to compare the ESH with other relaxations of ϑ(G) towards α(G). Lovász and Schrijver [25] proposed to add inequalities that boil down to (3a), and inequalities of the form (4c) and (4d) whenever {i, j} ∈ E. Hence, z^E_k(G) is at least as good as this bound for all k ≥ 3. Furthermore, Gruber and Rendl [19] proposed to add inequalities of the form (4c) and (4d) also if {i, j} ∉ E, hence the k-th level of the ESH is at least as strong as this relaxation for every k ≥ 3. Note that Fischer, Gruber, Rendl and Sotirov [12] add triangle inequalities into an SDP relaxation of Max-Cut. Therefore, applying the ESH to the Max-Cut relaxation as it is done in [15] can be viewed as a generalization of the approach in [12]. For a discussion of other approaches for improving a relaxation by including information of smaller polytopes into the relaxation see [1].

The Compressed Exact Subgraph Hierarchy

In this section we newly introduce a variant of the ESH, namely the compressed ESH, which at first sight is computationally favorable to the ESH, as it starts from a smaller SDP formulation of the Lovász theta function. Additionally, we compare this new hierarchy to the ESH and to other hierarchies from the literature.

Two SDP Formulations of the Lovász Theta Function

The starting point of the new compressed ESH is an SDP formulation of the Lovász theta function ϑ(G) by Lovász [24], namely
$$\vartheta(G) = \max\{\langle 1_{n\times n}, X\rangle : \mathrm{trace}(X) = 1,\ X_{i,j} = 0\ \forall \{i,j\} \in E,\ X \succeq 0\}. \qquad (T_n)$$
As the feasible region of (T_n) will be used later, we define
$$\mathrm{CTH}^2(G) = \{X \in S^n : \mathrm{trace}(X) = 1,\ X_{i,j} = 0\ \forall \{i,j\} \in E,\ X \succeq 0\}.$$
Before we continue, we compare the two SDP formulations (T_{n+1}) and (T_n) of ϑ(G). As already mentioned, (T_{n+1}) is an SDP with a matrix variable of order n + 1 and n + m + 1 equality constraints. The formulation (T_n) has a matrix variable of order n and m + 1 constraints, so both the number of variables and the number of constraints are smaller. Hence, in computations (T_n) seems favorable.

So far, there has been a lot of work on comparing (T_{n+1}) and (T_n). Gruber and Rendl [19] showed the following. If (x*, X*) is a feasible solution of (T_{n+1}), then X′ = (1/trace(X*)) X* is a feasible solution of (T_n) which has at least the same objective function value. Hence, an optimal solution of (T_{n+1}) can be transformed into an optimal solution of (T_n). They also proved that whenever X′ is optimal for (T_n), then X* = ⟨1_{n×n}, X′⟩ X′ is optimal for (T_{n+1}). Furthermore, Yildirim and Fan-Orzechowski [31] gave a transformation from a feasible solution X′ of (T_n) to obtain the vector x* of a feasible solution (x*, X*) of (T_{n+1}) with at least the same objective function value. Galli and Letchford [16] showed how to construct a corresponding X*. For an optimal X′ the obtained optimal (x*, X*) coincides with the one of Gruber and Rendl. Further details can be found in [16], where also the influence of adding certain cutting planes into (T_{n+1}) and (T_n) is discussed. We come back to that later in Section 3.4.
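The smaller formulation translates just as directly into CVXPY; a sketch (again with our own function name) is:

```python
def lovasz_theta_Tn(n, edges):
    """theta(G) via (T_n): matrix of order n, trace(X) == 1 plus one
    constraint per edge, i.e., m + 1 equality constraints in total."""
    X = cp.Variable((n, n), PSD=True)
    cons = [cp.trace(X) == 1] + [X[i, j] == 0 for (i, j) in edges]
    return cp.Problem(cp.Maximize(cp.sum(X)), cons).solve()

# Both formulations agree, e.g., on the 5-cycle:
# lovasz_theta_Tn(5, [(i, (i + 1) % 5) for i in range(5)])  # ~ sqrt(5)
```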
Introduction of the Compressed Exact Subgraph Hierarchy

Next, we newly introduce the compressed exact subgraph hierarchy, a hierarchy similar to the ESH, but starting from (T_n) instead of from (T_{n+1}). First, we verify that it makes sense to build such a hierarchy.

Lemma 4. If we add the constraint X ∈ STAB²(G) into (T_n) for a graph G, then the optimal objective function value is α(G), so
$$\alpha(G) = \max\{\langle 1_{n\times n}, X\rangle : X \in \mathrm{CTH}^2(G),\ X \in \mathrm{STAB}^2(G)\}. \qquad (5)$$

Proof. Let (P_C) be the SDP on the right-hand side of (5), let z_C be its optimal objective function value and let S(G) = {s_1, ..., s_t}. Let without loss of generality s_t be the incidence vector of a maximum stable set of G, and s_1 be the incidence vector of the empty set, which is of course stable. Then clearly
$$X = \frac{1}{\alpha(G)}\, s_t s_t^T + \Big(1 - \frac{1}{\alpha(G)}\Big)\, s_1 s_1^T$$
is feasible for (P_C) and has objective function value ⟨1_{n×n}, X⟩ = α(G), so α(G) ≤ z_C holds. Furthermore, any feasible solution X of (P_C) can be written as
$$X = \sum_{i=1}^{t} \lambda_i s_i s_i^T$$
for some λ ∈ Δ_t, where trace(X) = 1 implies $\sum_{i=1}^{t} \lambda_i s_i^T s_i = 1$. In consequence, the objective function value of X for (P_C) is equal to
$$\langle 1_{n\times n}, X\rangle = \sum_{i=1}^{t} \lambda_i (1_n^T s_i)^2 = \sum_{i=1}^{t} \lambda_i (s_i^T s_i)^2 \le \alpha(G) \sum_{i=1}^{t} \lambda_i s_i^T s_i = \alpha(G),$$
and hence z_C ≤ α(G) holds, which finishes the proof.

Lemma 4 corresponds to Lemma 1 for the ESH and justifies the introduction of the compressed exact subgraph hierarchy.

Definition 7. Let G = (V, E) be a graph with |V| = n and let J be a set of subsets of V. Then z^C_J(G) is the optimal objective function value of (T_n) with the ESC for every subgraph induced by a set in J, so
$$z^C_J(G) = \max\{\langle 1_{n\times n}, X\rangle : X \in \mathrm{CTH}^2(G),\ X_I \in \mathrm{STAB}^2(G_I)\ \forall I \in J\}. \qquad (6)$$
For k ∈ N_0 with k ≤ n the k-th level of the compressed exact subgraph hierarchy (CESH) is defined as z^C_k(G) = z^C_{J_k}(G).

As in the case of the ESH we can deduce the following result for the CESH.

Lemma 5. The statement of Lemma 2 holds analogously for the CESH, i.e., ϑ(G) = z^C_0(G) = z^C_1(G), z^C_{k+1}(G) ≤ z^C_k(G) for all 0 ≤ k ≤ n − 1, and z^C_n(G) = α(G).

Proof. Analogous to the proof of Lemma 2.

Hence, due to Lemma 2 and Lemma 5 both the ESH and the CESH start at ϑ(G) at level 1 and reach α(G) on level n.

Comparison to Other Hierarchies

Before we continue to consider the differences between the ESH and the CESH, we compare the CESH with other relaxations of α(G) based on (T_n). Schrijver [28] suggested to add non-negativity constraints into (T_n) to obtain stronger bounds. Galli and Letchford [16] proved that it is equivalent to include non-negativity constraints into (T_{n+1}) and (T_n), so z^E_2(G) is a stronger bound than this one because it induces non-negativity in (T_{n+1}). Lemma 3 implies that also for (T_n) it is equivalent to include X_I ∈ STAB²(G_I) and X_I ∈ STAB²(G^0_{k_I}), so z^C_2(G) induces non-negativity due to (3a). Hence, also z^C_2(G) is at least as good as the bound of Schrijver.

Dukanovic and Rendl [11] proposed to add so-called triangle inequalities to (T_n). Silvestri [30] showed that z^C_3(G) is at least as good an upper bound as the bound of Dukanovic and Rendl. This is intuitive, because the triangle inequalities correspond to (4a), (4b) and (4c) and therefore represent faces of STAB²(G_I) for k_I = 3. As a result, the CESH can be seen as a generalization of the relaxation of [11].

Comparison of the CESH and the ESH

Now we continue our comparison of the bounds based on the ESH and our new CESH.

Theorem 1. Let G = (V, E) be a graph with |V| = n and let J be a set of subsets of V. Then z^E_J(G) ≤ z^C_J(G).

Proof. We consider the transformation of an optimal solution of (T_{n+1}) into an optimal solution of (T_n) by Gruber and Rendl [19]. We show that this transformation applied to the optimal solution of (2) yields a feasible solution of (6) with at least the same objective function value, thus z^E_J(G) ≤ z^C_J(G). Towards that end, let (x*, X*) be an optimal solution of (2) and γ = z^E_J(G) = 1_n^T x* its objective function value. Let X′ = (1/γ) X*. First, we show that X′ is feasible for (6). Clearly X* − x*(x*)^T ⪰ 0 and γ ≥ 0 imply X′ ⪰ 0.
Furthermore, due to X*_{i,j} = 0 for all {i, j} ∈ E we have X′_{i,j} = (1/γ) X*_{i,j} = 0 for all {i, j} ∈ E, and trace(X′) = (1/γ) trace(X*) = (1/γ) 1_n^T x* = 1, so X′ ∈ CTH²(G). What is left to check for feasibility are the ESCs. We can rewrite X*_I ∈ STAB²(G_I) as $X^*_I = \sum_{i=1}^{t_I} \lambda^I_i S^I_i$ with $\sum_{i=1}^{t_I} \lambda^I_i = 1$ and λ^I_i ≥ 0 for all 1 ≤ i ≤ t_I. Let w.l.o.g. S^I_1 be the zero matrix of dimension k_I × k_I, i.e., the first stable set matrix corresponds to the empty set. Then we define
$$\lambda'^I_1 = \frac{\gamma - 1}{\gamma} + \frac{1}{\gamma}\lambda^I_1 \quad\text{and}\quad \lambda'^I_i = \frac{1}{\gamma}\lambda^I_i \ \text{ for } 2 \le i \le t_I.$$
It is easy to see that λ′^I_i ≥ 0 for all 1 ≤ i ≤ t_I and that $\sum_{i=1}^{t_I} \lambda'^I_i = 1$ holds. Furthermore, because S^I_1 is a zero matrix and so ((γ−1)/γ) S^I_1 = 0, we have
$$X'_I = \frac{1}{\gamma} X^*_I = \sum_{i=1}^{t_I} \lambda'^I_i S^I_i.$$
As a consequence X′_I ∈ STAB²(G_I), and thus X′ is feasible for (6).

It remains to determine the objective function value of X′ for (6). From X* − x*(x*)^T ⪰ 0 it follows that 1_n^T (X* − x*(x*)^T) 1_n ≥ 0 and hence ⟨1_{n×n}, X*⟩ ≥ (1_n^T x*)² = γ². This implies that
$$\langle 1_{n\times n}, X'\rangle = \frac{1}{\gamma}\langle 1_{n\times n}, X^*\rangle \ge \gamma$$
holds. To summarize, X′ is a feasible solution of (6) with objective function value at least γ = z^E_J(G). Therefore, the optimal objective function value of the maximization problem (6) is at least z^E_J(G), so z^E_J(G) ≤ z^C_J(G).

Theorem 1 states that the bound obtained by starting from (T_{n+1}) and including some ESCs is always at least as good as the bound obtained by starting from (T_n) and including the same ESCs. In particular, this implies that the relaxation on the k-th level of the ESH is at least as good as the relaxation on the k-th level of the CESH, which is formalized in the following corollary.

Corollary 1. Let G = (V, E) be a graph with |V| = n and let k ∈ N_0 with k ≤ n. Then z^E_k(G) ≤ z^C_k(G).

We now further investigate the theoretical difference between the ESH and the CESH, especially in the light of the results of Galli and Letchford [16]. They proved that whenever a collection of homogeneous inequalities is added to (T_{n+1}), the resulting optimal solution yields a feasible solution for (T_n) with the same collection of inequalities, which has at least the same objective function value. This implies that adding homogeneous inequalities to (T_{n+1}) gives stronger bounds on α(G) than adding the same inequalities to (T_n).

If we consider the ESCs in more detail as we did in Section 2.3, then it turns out that for k_I = 2 the inequalities (3a), (3b) and (3c) are homogeneous, while (3d) is inhomogeneous, so inhomogeneous inequalities are needed to represent ESCs.

Next, we give an intuition for the different behavior of inhomogeneous inequalities for the two SDP formulations of the Lovász theta function (T_{n+1}) and (T_n). Let (x*, X*) be an optimal solution of (T_{n+1}) with the additional constraints (3). From the proof of Theorem 1 we know that X′ = (1/γ) X* is a feasible solution of (T_n) with the additional constraints (3). Indeed, the homogeneous inequalities (3a), (3b) and (3c) are preserved under scaling, matching [16]. Scaling (3d) with 1/γ yields the corresponding inequality for X′ with right-hand side 1/γ, and since 1/γ ≤ 1 it follows that X′ satisfies (3d). If X′ is an optimal solution of (T_n) with the additional constraints (3) and we use the transformation X* = γX′, then clearly X* satisfies (3a), (3b) and (3c). Scaling (3d) with γ yields the corresponding inequality for X* with right-hand side γ. This does not imply that X* fulfills (3d), as γ ≥ 1.

To summarize, this consideration confirms that the ESCs for k_I = 2 yield a stronger restriction in (T_{n+1}) than they do in (T_n). This gap between the bounds gets even larger for larger k_I; for example, for k_I = 3 the inequality (4d) is inhomogeneous. This concludes our investigation of the new CESH.
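A small numerical check makes the scaling argument tangible. Below we use the standard boolean-quadric-type inequalities for a pair {i, j}, which presumably correspond to (3a)-(3d) above (the explicit form of (3) is not reproduced in this version of the text, so this labeling is an assumption): the homogeneous ones survive multiplication by γ ≥ 1, while the inhomogeneous one can break.

```python
import numpy as np

def check_pair(X, i, j, tol=1e-9):
    """Boolean-quadric-type conditions for the pair {i, j}; presumably
    the inequalities (3a)-(3d), with (3d) the inhomogeneous one."""
    return {
        "(3a) X_ij >= 0":               X[i, j] >= -tol,
        "(3b) X_ij <= X_ii":            X[i, j] <= X[i, i] + tol,
        "(3c) X_ij <= X_jj":            X[i, j] <= X[j, j] + tol,
        "(3d) X_ii + X_jj - X_ij <= 1": X[i, i] + X[j, j] - X[i, j] <= 1 + tol,
    }

X = np.array([[0.6, 0.2],
              [0.2, 0.6]])       # satisfies all four, tight for (3d)
gamma = 1.5
print(check_pair(X, 0, 1))          # all True
print(check_pair(gamma * X, 0, 1))  # (3a)-(3c) still hold, (3d) fails
```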
The Scaled Exact Subgraph Hierarchy

In Section 3 we saw that including an ESC into (T_{n+1}) as in the ESH gives a stronger bound than including the same ESC into (T_n) as in the CESH. In this section we investigate whether this is due to a suboptimal definition of the ESCs for the latter case. In particular, we go back to the intuition behind ESCs for (T_{n+1}) and transfer this intuition to (T_n). This will lead to the new definitions of scaled ESCs and the scaled ESH. We will explore this hierarchy and compare the CESH and the scaled ESH in detail.

Introduction of the Scaled Exact Subgraph Hierarchy

To start, observe the following. It can be confirmed easily that both (T_{n+1}) and (T_n) are upper bounds on α(G). Let s ∈ S(G) be a stable set vector that corresponds to a maximum stable set. Then X* = ss^T is feasible for (T_{n+1}) and has objective function value α(G). Therefore, intuitively STAB²(G) defines exactly the appropriate polytope for (T_{n+1}). For (T_n) the matrix X′ = (1/(s^T s)) ss^T yields a feasible solution with objective function value α(G), whereas X* = ss^T is not feasible unless α(G) = 1. Hence, intuitively it makes more sense to consider the polytope spanned by matrices of the form (1/(s^T s)) ss^T for s ∈ S(G) for (T_n) than to consider STAB²(G). This leads to the following definition.

Definition 8. Let G = (V, E) be a graph with |V| = n. Then the scaled squared stable set polytope SSTAB²(G) of G is defined as
$$\mathrm{SSTAB}^2(G) = \mathrm{conv}\Big(\{0_{n\times n}\} \cup \Big\{\tfrac{1}{s^T s}\, ss^T : s \in S(G),\ s \ne 0_n\Big\}\Big),$$
where the zero matrix corresponds to the empty stable set.

The goal of this section is to investigate a new modified version of the CESH based on the scaled squared stable set polytope, defined in the following way.

Definition 9. Let G = (V, E) be a graph and let I ⊆ V. Then the scaled exact subgraph constraint (SESC) for G_I is defined as X_I ∈ SSTAB²(G_I). Furthermore, let |V| = n and let J be a set of subsets of V. Then z^S_J(G) is the optimal objective function value of (T_n) with the SESC for every subgraph induced by a set in J, so
$$z^S_J(G) = \max\{\langle 1_{n\times n}, X\rangle : X \in \mathrm{CTH}^2(G),\ X_I \in \mathrm{SSTAB}^2(G_I)\ \forall I \in J\}.$$
For k ∈ N_0 with k ≤ n the k-th level of the scaled exact subgraph hierarchy (SESH) is defined as z^S_k(G) = z^S_{J_k}(G).

Note that with the considerations above it does not make sense to include the SESC for the whole graph G into (T_{n+1}), as this SDP does not yield an upper bound on α(G), because all solutions corresponding to α(G) are not feasible. Hence, we introduce a hierarchy based on SESCs only starting from (T_n) and not from (T_{n+1}). Additionally, note that a priori we do not know whether the SESH has as nice properties as the ESH and the CESH.

Comparison of the SESH and the CESH

The next lemma is the key ingredient to compare the SESH to the CESH.

Lemma 6. Let G = (V, E) be a graph. Then X ∈ SSTAB²(G) holds if and only if X ∈ STAB²(G) and trace(X) ≤ 1.

Proof. If X ∈ SSTAB²(G), then X can be written as
$$X = \lambda_1 0_{n\times n} + \sum_{i=2}^{t} \lambda_i \frac{1}{s_i^T s_i}\, s_i s_i^T$$
for λ ∈ Δ_t, where S(G) = {s_1, ..., s_t} with s_1 = 0_n. Since each (1/(s_i^T s_i)) s_i s_i^T is itself a convex combination of the stable set matrices s_i s_i^T and 0_{n×n}, it follows that X ∈ STAB²(G); moreover, trace(X) = Σ_{i≥2} λ_i ≤ 1. Hence, X ∈ SSTAB²(G) implies that X ∈ STAB²(G) and trace(X) ≤ 1 holds.

Now assume X ∈ STAB²(G) and trace(X) ≤ 1. Then X can be rewritten as
$$X = \sum_{i=1}^{t} \lambda_i s_i s_i^T \quad\text{with } \lambda \in \Delta_t, \qquad\text{so}\qquad \mathrm{trace}(X) = \sum_{i=2}^{t} \lambda_i\, s_i^T s_i \le 1. \qquad (8)$$
We define λ̃_i = λ_i s_i^T s_i for 2 ≤ i ≤ t and λ̃_1 = 1 − Σ_{i=2}^{t} λ̃_i. Then clearly λ̃_i ≥ 0 holds for 2 ≤ i ≤ t. Furthermore, (8) implies that λ̃_1 ≥ 0 holds, so λ̃ ∈ Δ_t, and
$$X = \tilde\lambda_1 0_{n\times n} + \sum_{i=2}^{t} \tilde\lambda_i \frac{1}{s_i^T s_i}\, s_i s_i^T \in \mathrm{SSTAB}^2(G).$$

This together with Lemma 6 allows us to prove the following.

Theorem 2. Let G = (V, E) be a graph and let J be a set of subsets of V. Then z^S_J(G) = z^C_J(G).

Proof. By Lemma 6, each SESC X_I ∈ SSTAB²(G_I) in the definition of z^S_J(G) can be replaced by the ESC X_I ∈ STAB²(G_I) together with trace(X_I) ≤ 1. The latter is redundant, as trace(X) = 1 is fulfilled by all X ∈ CTH²(G) and all elements on the main diagonal of X are non-negative because X ⪰ 0.
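Lemma 6 can also be verified numerically on small graphs. The sketch below tests membership in STAB²(G) by solving a feasibility LP over the coefficients λ (reusing `stable_set_vectors` from the earlier snippet); the function names are ours and the approach is brute force, so it only scales to tiny graphs.

```python
import numpy as np
from scipy.optimize import linprog

def in_stab2(X, n, edges):
    """Is X a convex combination of stable set matrices ss^T?"""
    mats = [np.outer(s, s) for s in stable_set_vectors(n, edges)]
    A_eq = np.vstack([np.column_stack([M.ravel() for M in mats]),
                      np.ones(len(mats))])
    b_eq = np.concatenate([X.ravel(), [1.0]])
    res = linprog(np.zeros(len(mats)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(mats), method="highs")
    return res.status == 0

# A scaled stable set matrix lies in SSTAB^2, hence (Lemma 6) in STAB^2
# with trace at most 1:
edges = [(i, (i + 1) % 5) for i in range(5)]
s = np.array([1, 0, 1, 0, 0])          # a stable set of the 5-cycle
X = np.outer(s, s) / s.dot(s)
print(in_stab2(X, 5, edges), np.trace(X) <= 1 + 1e-9)  # True True
```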
Thus, Theorem 2 implies that the SESH and the CESH coincide, and in particular that the SESH has the same properties as the CESH stated in Lemma 5, which we now formulate explicitly.

Corollary 2. Let G = (V, E) be a graph with |V| = n and let k ∈ N_0 with k ≤ n. Then z^S_k(G) = z^C_k(G); in particular, ϑ(G) = z^S_0(G) = z^S_1(G) and z^S_n(G) = α(G).

Hence, even though intuitively it makes more sense to add SESCs into (T_n) instead of ESCs, both versions give the same bound and the SESH and the CESH coincide.

Computational Comparison

In the previous sections we have theoretically investigated first the original ESH, which starts from (T_{n+1}) and includes ESCs. Next, we introduced the CESH, which starts from (T_n) and includes ESCs, and finally the SESH, which starts from (T_n) and includes SESCs. Each of these hierarchies can be exploited computationally by including a wisely chosen subset J of all possible ESCs or SESCs. We denote the resulting bounds based on the ESH, the CESH and the SESH by z^E_J(G), z^C_J(G) and z^S_J(G), respectively. So far we have proven in Theorem 1 and Theorem 2 that z^E_J(G) ≤ z^C_J(G) = z^S_J(G) holds for all graphs G and for all sets of subsets J, hence the bounds based on the CESH and the SESH coincide and the bounds based on the ESH are always at least as good as those bounds.

In this section we compare the ESH and the CESH computationally. We refrain from computations with the SESH, since both the obtained bounds and the sizes of the SDPs are the same for the SESH and the CESH. First, we are interested in whether z^E_J(G) is significantly better than z^C_J(G). Second, we are interested in the running times. In theory, the running times for z^C_J(G) should be smaller, because the matrix variable is of order n instead of n + 1 and the number of equality constraints is n less.

We consider several graphs in various settings. Some graphs are from the Erdős-Rényi model G(n, p) for different values of n and p (the probability that an edge is present in the graph), some are complement graphs of graphs of the second DIMACS implementation challenge [10] and some come from the house of graphs collection [5]. Furthermore, there is a spin glass graph (see [12]), a Paley graph, a circulant and a cubic graph among the instances. In the computations we always compare including all ESCs of the same set J into (T_{n+1}) and (T_n), so we compute z^E_J(G) and z^C_J(G). The source code and all the used graphs are available online at https://arxiv.org/src/2003.13605/anc.

All computations are done on an Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz with 32 GB RAM with MATLAB. We use the interior point solver MOSEK [26] for solving the SDPs. Note that there is a lot of research on how to solve SDPs of the form (2) much faster using the bundle method, see Gaar [13] and Gaar and Rendl [14,15]. We refrain from using these involved methods, as we are interested in comparing the bounds in a simple way.

In the first experiment, we compare levels of the ESH and the CESH. Including all possible ESCs of order k for a graph of order n adds $\binom{n}{k}$ ESCs to the SDPs (T_{n+1}) and (T_n), so these computations are out of reach rather quickly. Table 3 summarizes the values of z^C_J(G) and z^E_J(G) for including all ESCs for k ∈ {0, 2, 3, 4} and presents the running times in seconds to solve the corresponding SDPs.

First, we note that indeed the computation of ϑ(G) (corresponding to the column k = 0) yields the same value when computing it via (T_{n+1}) and via (T_n). Furthermore, the computations confirm that z^E_J(G) ≤ z^C_J(G) holds for all graphs G.
On the second level of the ESH and the CESH the two values coincide for almost all graphs. Only the instances HoG 34272, HoG 34274 and HoG 34276 show a significant difference. On the third and fourth level the difference is more substantial. This is not surprising, as there are more inhomogeneous facets defining STAB² in these cases. In the running times there is almost no difference for small graphs with not so many ESCs. Only when the number of ESCs becomes larger is the computation time for z^C_J(G) typically significantly shorter. However, most of the time this comes with a worse bound.

Computing the k-th level of the ESH and the CESH by including all ESCs of order k is beyond reach rather soon, so in the next experiments we include the ESCs only for some subgraphs of a given order k. In order to determine the set J of subgraphs for which to include the ESCs we follow the approach of Gaar and Rendl [14,15]. In particular, we start with J = ∅ and iteratively solve an SDP for computing the Lovász theta function (either (T_{n+1}) or (T_n)) with the already determined ESCs induced by J. Then we use the optimal solution of the SDP in order to search for violated ESCs. To find potentially violated subgraphs we perform a heuristic search among all subgraphs that tries to minimize the inner product of the submatrix of the optimal solution corresponding to a subgraph and certain matrices (e.g., matrices that induce facets of STAB²(G^0_k)). We refer to [14,15] for more details. We perform 10 iterations, including at most 200 ESCs of order k in each iteration, so in the end for each graph and for each k we have a set J of at most 2000 ESCs. Of course it makes a difference whether we do the search starting from (T_{n+1}) or from (T_n), as different subgraphs might be violated. We denote by J^E and J^C the sets of subsets obtained by using (T_{n+1}) and (T_n), respectively, in order to search for violated subgraphs. The used sets J^E and J^C are available online at https://arxiv.org/src/2003.13605/anc. Table 4 summarizes the cardinalities of J^E and J^C. The values of z^E_J(G) and z^C_J(G) and the running times for the sets J = J^E can be found in Table 5 and Table 6. The analogous computational results when considering J = J^C are presented in Table 7 and Table 8.

First, observe in Table 4 that the cardinality of J^C is typically larger than the cardinality of J^E. This is plausible, because due to the additional row and column in (T_{n+1}) and the SDP constraint in this formulation some ESCs might be satisfied which are violated in the version with (T_n).
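Schematically, the separation procedure described above amounts to the following loop. `solve_with_escs` and `find_violated` are hypothetical callbacks standing in for the SDP solve (via (T_{n+1}) or (T_n)) and the heuristic subgraph search of [14,15]; only the control flow is meant literally.

```python
def cutting_plane_bound(n, edges, k, rounds=10, max_new=200,
                        solve_with_escs=None, find_violated=None):
    """Skeleton of the iterative ESC selection: solve, separate, repeat."""
    J = set()
    bound = None
    for _ in range(rounds):
        bound, X_opt = solve_with_escs(n, edges, J)   # theta SDP + current ESCs
        new_sets = find_violated(X_opt, k, limit=max_new)
        if not new_sets:
            break                                     # no violated ESCs found
        J.update(new_sets)
    return bound, J
```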
When we turn to the values of z^E_J(G) and z^C_J(G) in Table 5 and Table 7 we observe for both J = J^E and J = J^C that (a) the larger k becomes, the better the bounds are, (b) for k = 0, so for computing ϑ(G), we have z^E_J(G) = z^C_J(G) as expected, (c) for a fixed set J we have z^E_J(G) ≤ z^C_J(G) in accordance with the theory derived earlier, and (d) typically the difference between z^E_J(G) and z^C_J(G) increases with increasing k. This behavior is observable for both J = J^E and J = J^C, hence the choice of the set J has no significant influence on the behavior of the values of z^E_J(G) and z^C_J(G). However, we observe that usually the values of z^E_J(G) for J^E are the best bounds, then z^E_J(G) for J^C are the second best bounds, z^C_J(G) for J^C are the third best bounds and z^C_J(G) for J^E yields the worst bounds, even if the differences are typically very small. This behavior is not surprising, because we know that for a fixed set J we have z^E_J(G) ≤ z^C_J(G), and it makes sense that the final bounds obtained are better when using the same formulation of ϑ(G) to obtain the bounds that was used to obtain J.

Looking at the running times in Table 6 and Table 8 we see that our expectations are not met: even though the order of the matrix variable and the number of constraints of the SDP to compute z^C_J(G) are smaller than those to compute z^E_J(G), the running times are typically larger. So apparently the highly sophisticated interior point solver MOSEK can deal better with the computation of z^E_J(G). If we compare the running times for the sets J = J^E and J = J^C we see that the running times for J^E are typically shorter, but there are also instances (e.g., G_{100,0.25} for k = 6) where the computation of both z^E_J and z^C_J is faster for J = J^C than for J = J^E.

As a result, we confirm that tightening the Lovász theta function towards the stability number with the help of ESCs typically works better when starting from the Lovász theta function formulation (T_{n+1}) (as is done in the ESH) than when starting from the formulation (T_n) (as is done in the CESH), even though this is not obvious at first sight, as the latter SDPs are smaller. However, in some cases it can be advantageous to use the CESH, but then also the subset J should be determined using (T_n).

Conclusions

In this paper we derived two new SDP hierarchies from the Lovász theta function towards the stability number. The classical ESH from the literature starts from the SDP (T_{n+1}) and adds ESCs. We introduced the new CESH starting from (T_n) and including ESCs. We proved that this new hierarchy has some of the same properties as the ESH. Moreover, we showed that the bounds based on the ESH are at least as good as those from the CESH, not only when including all ESCs of a certain order, but also when including only some of them.

We also newly introduced SESCs, which are a more natural formulation of exactness for (T_n). Including them into (T_n) yields the new SESH. Even though SESCs are more intuitive, the bounds based on the CESH and the SESH coincide.

In our computational results with an off-the-shelf interior point solver we typically obtain the best bounds with the fastest running times when using the ESH. However, for some instances using the CESH is beneficial.
It would be interesting to derive a specialized solver for the CESH, as was done by Gaar and Rendl [14,15] for the ESH. They dualize the ESCs, use the bundle method and, instead of solving a huge SDP with all ESCs, they iterate and solve (T_{n+1}) with a modified objective function in each iteration. Since (T_n) has a smaller matrix order and fewer constraints, this approach presumably works even better for the CESH. Such a solver would allow comparing the running times for the ESH and the CESH in a more sophisticated way.

Another open question is the more precise relationship of the ESH and the CESH. In this paper we have shown that z^E_k(G) ≤ z^C_k(G) holds for all k ∈ {1, ..., n}. It would be interesting to know if there is some constant ℓ ≥ 1 such that z^C_{k+ℓ}(G) ≤ z^E_k(G) holds for all graphs G and for all k ∈ {1, ..., n}, so that it suffices to add ℓ levels to the CESH to reach the quality of the ESH.

Finally, it would be interesting to investigate which implications it has for the ESH and the CESH to induce the positive semidefiniteness constraint not for the whole matrix X, but only for a submatrix of X, as has been done in the recent work [2].

Table 1: The values of z^E_k(G) for the graphs considered in Example 1.
Table 3: The values of z^E_J(G) and z^C_J(G) for different graphs G, including all ESCs of order 0 (corresponding to ϑ(G)), 2, 3 and 4, and the running times to compute the values.
Table 5: The values of z^E_J(G) and z^C_J(G) for different graphs G and sets J = J^E for subgraphs of order k, for k ∈ {0, 2, 3, 4, 5, 6}.
Table 6: The running times for the results of Table 5.
Table 8: The running times for the results of Table 7.
Concerted Mechanism of Carrier Dynamics in Laser‐Excited Fen/(MgO)m(001) Heterostructures from Real‐Time Time‐Dependent DFT

Using real‐time time‐dependent density functional theory (RT‐TDDFT), the electronic response of a Fen/(MgO)m(001) (n=1,3,5 and m=3,5,7) metal/insulator heterostructure to an optical excitation is calculated, considering laser frequencies below, near, and above the bandgap of the insulator and two directions of polarization. The spatial redistribution of electronic charge after illumination shows a strong dependence on the frequency and polarization direction of the laser pulse, with a similar pattern for all thicknesses. The comparison of the layer‐resolved changes in occupation of the ground‐state orbitals after optical excitation obtained for Fen/(MgO)m(001) and bulk Fe reveals the origin of excited carriers in the heterostructures: In the central and interface Fe layers carriers are excited from states in the vicinity of the Fermi level to the conduction band of MgO. Simultaneously, excitations take place from the valence band of MgO to Fe states above the Fermi level. This concerted mechanism allows for an effective bidirectional relocation of excited carriers between the metallic and insulating subsystems in heterostructures with a thickness of several nanometers, providing an effective accumulation of hot carriers in the insulating layers, even at photon energies in the vicinity of and below the bandgap of bulk MgO.

Introduction

The microscopic understanding of non-equilibrium states created, e.g., through femtosecond laser pulses has developed into a central topic in condensed matter research. Such non-equilibrium states can be very different from the ground state and encompass the realization of transient phases, which cannot be reached by conventional equilibrium methods, [1-3] some of which are evidenced by a correlation between the time-resolved changes in tr-XAS at the O-K edge and ultrafast electron diffraction experiments sensitive to the Fe subsystem. However, the potential mechanisms allowing for a direct transfer of hot carriers between the Fe and MgO systems remain elusive. In this context, real-time time-dependent density functional theory (RT-TDDFT) calculations can render detailed insight into the redistribution of electronic charge and changes in occupation numbers in the heterostructure after photoexcitation and thus enable a thorough understanding of the excitation processes and the transfer of carriers in metal/oxide heterostructures. Recently, this approach was used to simulate the carrier dynamics in a minimal model Fe1/(MgO)3(001) heterostructure, containing a single Fe layer and three MgO layers, excited by an ultrashort laser pulse. [24,25] The results indicate a strong dependence of the excitation on the laser frequency and the polarization direction of the electric field. While the Fe layer is most efficiently addressed for frequencies below the bandgap of bulk MgO, the main excitation shifts to the MgO part for higher frequencies and out-of-plane polarization. Moreover, hybridized states at the interface play an essential role in mediating the energy transfer from Fe to MgO and vice versa. A concerted excitation mechanism was proposed, involving two simultaneous excitations via interface states: one from occupied states of the metal to the conduction band of the insulator and, simultaneously, another from the top of the valence band of MgO into Fe states above the Fermi level.
[25] This interface-based mechanism allows reaching energy levels for the hot carriers that are separated by nearly twice the photon energy. Since Fe1/(MgO)3(001) contains only a single metal layer, the question arises about the robustness of this mechanism in more realistic heterostructures with an extended number of layers of both the metal and the insulator.

Here, we focus on the role of the Fe and MgO thickness on the propagation of excitations through the Fe/MgO(001) interface, induced by a laser pulse. To assess this effect, we simulate the explicit time evolution of Fe3/(MgO)5(001) and Fe5/(MgO)7(001) heterostructures using RT-TDDFT and compare the excitation pattern to the one in Fe1/(MgO)3(001), [24,25] as well as bulk Fe. We consider different laser frequencies below, around and above the bulk MgO bandgap and both in- and out-of-plane polarization.

The paper is structured as follows: The computational details are presented in Section 2. Section 3.1 comprises a brief discussion of the ground-state geometry and electronic structure of the Fe3/(MgO)5(001) and Fe5/(MgO)7(001) heterostructures. In Section 3.2, we discuss the absorption spectra from the random phase approximation (RPA). The results of the TDDFT calculations in the real-time domain are presented in Section 3.3, which focuses on the time evolution of the charge density redistribution, and Section 3.4, where the excitation patterns extracted from the time-dependent occupation numbers in Fen/(MgO)m(001) heterostructures are compared to the one in bulk Fe. Finally, Section 4 summarizes the results.

Computational Details

The structural optimization of the heterostructures (lattice parameters and internal positions) was performed with the VASP plane wave code (version 5.4.4), [26,27] using the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof (PBE) [28] for the exchange and correlation functional, with a plane wave cutoff of 500 eV and a k-point grid of 16 × 16 × 6. The electronic structure, optical absorption spectra, and time-dependent properties were calculated with the ELK code, [29] using the previously optimized geometry. The ELK code is based on the all-electron full-potential linearized augmented-plane wave (FLAPW) method and implements time-dependent DFT (TDDFT) in the real-time (RT) domain. For the exchange-correlation functional we have chosen the local (spin) density approximation, L(S)DA, in the parametrization of Perdew and Wang (PW92). [30] To model the Fe3/(MgO)5(001) and Fe5/(MgO)7(001) heterostructures, we used muffin-tin radii of 1.139, 1.164, and 0.855 Å for Fe, Mg, and O, respectively. To keep the numerical effort tractable for the RT-TDDFT part, the plane wave cut-off parameter RK_max was set to 7. This proved sufficient to obtain an electronic density of states (DOS) in good agreement with the VASP results. An 8 × 8 × 3 k-mesh was used for the reciprocal space sampling, leading to a convergence of the total energy within 11 meV compared to a 22 × 22 × 10 mesh, while the magnetic moments of the Fe atoms converged within 0.016 μB/ion and the charge within the corresponding muffin-tin (MT) spheres to 10^-3 e^-/MT-sphere, which is significantly smaller than the time-dependent variation of this quantity. The convergence criterion for the electronic self-consistency cycle was a root-mean-square change of 10^-7 a.u.
in the Kohn-Sham potential. The presented results are within the scalar-relativistic approximation, as calculations with explicit inclusion of spin-orbit coupling (SOC) did not lead to notable changes in the occupation numbers.

For comparison to the RT-TDDFT results, the frequency-dependent dielectric function was calculated in the framework of the random phase approximation in the limit of q → 0. For the RPA calculations, the reciprocal space is sampled by a 16 × 16 × 6 k-mesh. In the RT-TDDFT investigation, we simulate laser pulses with different laser frequencies but the same peak power density of S_peak ≈ 5 × 10^12 W cm^-2 and constant duration. The monochromatic electromagnetic wave is folded with a Gaussian envelope with a constant full-width at half-maximum (FWHM) of 5.81 fs. The peak of the pulse is reached at t = 11.6 fs after the start of the simulation. [33-35] The electric field of the laser pulse, expressed by the vector potential A_ext(t), enters the KS equations in the velocity gauge.

By solving the TDKS equations, we can obtain the time-dependent electronic properties of a system, such as the time-resolved DOS (TDDOS), D_σ(E, t), which maps the transient occupation numbers of the Kohn-Sham orbitals onto the ground-state DOS using the following scheme, see ref. [33]:
$$D_\sigma(E, t) = \sum_{i,\mathbf{k}} g_{i\sigma\mathbf{k}}(t)\, \delta(E - \epsilon_{i\sigma\mathbf{k}}),$$
where g_{iσk}(t) are the time-dependent and spin-resolved occupation numbers, defined as
$$g_{i\sigma\mathbf{k}}(t) = \sum_{j} n_{j\sigma\mathbf{k}}\, \big|\langle \Phi_{i\sigma\mathbf{k}} \,|\, \Phi_{j\sigma\mathbf{k}}(t) \rangle\big|^2;$$
here, n_{jσk} is the occupation number of the j-th orbital and Φ_{iσk} are the ground-state Kohn-Sham orbitals. [32]

Geometry and Electronic Structure in the Ground State

Before turning to the optical excitations in the Fe3/(MgO)5(001) and Fe5/(MgO)7(001) heterostructures, we briefly discuss the ground-state properties and compare them to Fe1/(MgO)3(001). [24,25] As shown in Figure 1, the layers of bcc Fe are rotated by 45° with respect to the MgO lattice to achieve the best lattice match, with O located apically to Fe. The structural properties are discussed in the Supporting Information, [36] see Table S1 (Supporting Information).
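To make the pulse parametrization of Section 2 concrete, the sketch below constructs an electric field with the stated envelope (FWHM 5.81 fs, peak at 11.6 fs) and a peak amplitude matched to S_peak ≈ 5 × 10^12 W cm^-2 via the vacuum relation S = cε0E0²/2. The exact envelope and gauge conventions of the ELK runs are assumptions here; the snippet is for illustration only.

```python
import numpy as np

c, eps0 = 2.998e8, 8.854e-12                 # SI units
S_peak = 5e12 * 1e4                          # W/cm^2 -> W/m^2
E0 = np.sqrt(2 * S_peak / (c * eps0))        # peak field, ~6e9 V/m

fwhm, t_peak = 5.81e-15, 11.6e-15            # seconds
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # Gaussian width from FWHM

def efield(t, photon_energy_eV=1.63):
    """Monochromatic wave folded with a Gaussian envelope (assumed form)."""
    omega = photon_energy_eV * 1.602e-19 / 1.055e-34   # photon energy -> rad/s
    envelope = np.exp(-(t - t_peak) ** 2 / (2 * sigma ** 2))
    return E0 * envelope * np.cos(omega * t)

t = np.linspace(0.0, 25e-15, 2000)
E = efield(t)    # e.g., the below-bandgap 1.63 eV pulse
```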
The impact of the layer thickness on the electronic structure is assessed based on the layer-resolved density of states (LDOS) of Fe1/(MgO)3(001), Fe3/(MgO)5(001), and Fe5/(MgO)7(001), shown in Figure 2, using a Gaussian-type smearing of σ = 0.05 eV for all curves. A central observation is the narrowing of the Fe 3d band at the interface. The effect is most pronounced for Fe1/(MgO)3(001), while the bandwidth increases for the thicker heterostructures, and the shape resembles the one of bulk Fe for the Fe layers further away from the interface toward the central layer. The smaller bandwidth leads to sharper features, for instance the peaks observed in the minority spin channel of Fe(IF) close to E_F at about -0.5 and +0.2 eV in Fe1/(MgO)3(001), which exhibit a considerable hybridization with O 2p states of MgO(IF). These peaks split up further and broaden for Fe3/(MgO)5(001) and Fe5/(MgO)7(001). While the majority spin Fe 3d band is nearly fully occupied, the minority 3d band extends to +2.6 eV. As will be discussed below, this has a significant effect on the excitation pattern. Within the MgO part, the band offset between the MgO valence band and the Fe 3d band is reduced with increasing thickness of MgO. The MgO valence band edge is shifted toward the Fermi level, from -3.7 eV for Fe1/(MgO)3(001) to about -3.1 eV for Fe5/(MgO)7(001). A common characteristic of all systems is the significant hybridization between the d_{3z^2-r^2} orbitals of Fe(IF) and the p_z orbitals of the apical O(IF), leading to a noticeable DOS in MgO(IF), e.g., at +0.8 eV in the majority spin channel (for a discussion of the orbital character of the respective states, see refs. [24,25]). Important interface-related features in the LDOS of MgO(IF) are observed around -2 eV in the majority channel and at +0.2 eV in the minority channel. These arise from the hybridization of the d_xz and d_yz orbitals of Fe with the p_x and p_y orbitals of the apical O. As displayed in the insets showing the magnified LDOS in the MgO layers around the Fermi level, the interface states fade out exponentially in the deeper MgO layers away from the interface as the thickness of MgO is increased, with the central layers approaching bulk MgO behavior.

Absorption Spectra from RPA

The imaginary part of the frequency-dependent dielectric tensor, Im[ε(ω)], characterizes the optical absorption properties of the heterostructure. In Figure 3, the in-plane (ε_xx(ω) = ε_yy(ω), upper panel) and out-of-plane (ε_zz(ω), lower panel) components of the imaginary part of the dielectric tensor calculated in the random phase approximation for Fe1/(MgO)3(001), Fe3/(MgO)5(001), and Fe5/(MgO)7(001) are compared to bulk Fe and MgO.

A striking feature of Fe1/(MgO)3(001) is that the in-plane components are dominated by the metallic Fe layer with a large absorption in the low-energy region. On the contrary, the out-of-plane components are determined by the insulating MgO part, with significantly reduced absorption for frequencies below the bandgap of bulk MgO.
Here, only some contributions from the hybridized states in the gap from the interface MgO layer are visible. Increasing the thickness of the Fe slab to 3 and 5 layers adds significant weight to ε_xx(ω) for ℏω < 4 eV compared to Fe₁/(MgO)₃(001) and also introduces a sizable out-of-plane absorption in ε_zz(ω) in the same energy range. These differences in the absorption behavior of Fe₃/(MgO)₅(001) and Fe₅/(MgO)₇(001) with respect to Fe₁/(MgO)₃(001), in particular the reduced anisotropy, imply that the time-resolved evolution will also be affected by the increasing thickness of Fe and MgO in the heterostructure.

Real-Time Evolution of the Charge Distribution

To understand the dynamics of carrier excitation and carrier transfer across the interface, we analyze the electron density redistribution upon laser excitation obtained from RT-TDDFT. For a direct comparison with the minimal model system Fe₁/(MgO)₃(001), we applied laser pulses with the same photon energies and a constant peak power density of S_peak ≈ 5 × 10¹² W cm⁻², as in our previous work. [24,25] The pulse duration is limited by folding with a Gaussian envelope with a full width at half maximum (FWHM) of 5.81 fs, which results in a finite width of about 0.6 eV (FWHM) in the frequency domain (see the consistency check below). Different photon energies were considered, ranging from below (ℏω = 1.63 eV and ℏω = 3.27 eV), through on the order of (ℏω = 4.5 eV), to above (ℏω = 7.75 eV) the LDA bandgap of bulk MgO (4.64 eV [37,38]). Pulses with the electric field oriented along the x- (in-plane) or z-axis (out-of-plane) were taken into account. We concentrate here on the thickest heterostructure, Fe₅/(MgO)₇(001); the electron density redistribution for Fe₃/(MgO)₅(001) is presented in Figure S1 of the Supporting Information [36] and exhibits a qualitatively similar picture. Smaller deformations of the charge clouds nevertheless persist, which are also subject to small oscillating fluctuations. Figures S2 and S3 (Supporting Information [36]) provide further detail on the magnitude of these fluctuations.

In-plane polarized pulses with photon energies of ℏω = 1.63 eV and ℏω = 3.27 eV, both lower than the bulk MgO bandgap, deform mainly the charge clouds in the Fe layers, with a weaker impact on the apical oxygen in MgO(IF); see Figure 4a,b. The largest charge redistribution is observed in the Fe(IF) and Fe(IF+1) layers, indicating a transfer from in-plane to out-of-plane 3d orbitals, whereas the excitation within Fe(C) appears to be smaller. At larger photon energies close to (4.5 eV) and beyond (7.75 eV) the bandgap of bulk MgO, the charge redistribution around the Fe atoms decreases, whereas a notable charge depletion at the O sites throughout the MgO part emerges for 7.75 eV.

For out-of-plane pulses with energies of ℏω = 1.63 eV and ℏω = 3.27 eV in Figure 4e,f, the excitation is again mainly in the Fe part, as for in-plane polarization, but significantly weaker, consistent with the strong anisotropy in the response observed for Fe₁/(MgO)₃(001). [24]
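The quoted ~0.6 eV spectral width follows directly from the 5.81 fs envelope. A quick check (ours, assuming a transform-limited Gaussian and that both FWHM values refer to the field envelope):

```python
import numpy as np

# For a Gaussian field envelope, FWHM_E = 8*ln(2)*hbar / FWHM_t.
HBAR_EV_FS = 0.6582119
fwhm_t = 5.81                                      # fs, from the text
fwhm_e = 8.0 * np.log(2.0) * HBAR_EV_FS / fwhm_t   # eV
print(f"spectral FWHM ~ {fwhm_e:.2f} eV")          # ~0.63 eV, i.e. the ~0.6 eV quoted
```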
In contrast, for ℏω = 7.75 eV (Figure 4h), which lies well beyond the bulk MgO bandgap, a qualitatively different picture of the excitation emerges: a strong depletion of charge from the Fe slab, especially at Fe(IF), with accumulation in the interstitial part, and a particularly strong excitation throughout the MgO region, with depletion at the oxygen sites and accumulation at the Mg sites, which were not involved in the preceding cases.

In conclusion, the results for the thicker heterostructures Fe₃/(MgO)₅(001) (cf. [36]) and Fe₅/(MgO)₇(001) confirm the previously reported trends for Fe₁/(MgO)₃(001) concerning the frequency- and polarization-dependent response of the system, [25] despite the qualitative changes in Im[ε(ω)] for the thicker systems: laser pulses with lower energy excite primarily the Fe slab, in particular for in-plane polarization of the light. In turn, the MgO part is addressed more efficiently by photons with energies above the MgO bulk bandgap and with out-of-plane polarization. In both cases, the central Fe layers exhibit a smaller excitation than the layers closer to the IF.

Frequency Dependence of the Excitation Pattern

In a further step, we disentangle the role of different energy-, spin-, and layer-resolved excitation processes in promoting excited carriers through the heterostructure. For this, we calculated the change of the spin- and layer-resolved TDDOS, D_σ(E, t), of Fe₃/(MgO)₅(001) and Fe₅/(MgO)₇(001) as a function of time t before, during, and after the laser pulse. As reported previously for the minimal system Fe₁/(MgO)₃(001), the energy variation of ΔD_σ(E, t) remains essentially constant in time after the decay of the laser pulse. [24,25] This is also the case for the more realistic heterostructures, as demonstrated for Fe₅/(MgO)₇(001) in the Supporting Information. [36] For brevity, we therefore discuss in the following only changes in the partial TDDOS of Fe₅/(MgO)₇(001) at time t₁ = 20.2 fs relative to its initial distribution at time t₀ = 0, i.e., ΔD_σ(E) = D_σ(E, t₁) − D_σ(E, t₀). For better visibility, we plot the absolute value of ΔD_σ(E) and distinguish depletion and accumulation of occupation of the corresponding orbitals by red and blue colors, as we did for the charge densities.
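The plotting convention just described can be sketched in a few lines (our own illustration; inputs are 1D arrays on a common energy grid, not the authors' data structures):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_delta_dos(energy_ev, dos_t1, dos_t0, ax=None):
    """|Delta D(E)| with the sign encoded as color:
    blue = accumulation, red = depletion."""
    delta = dos_t1 - dos_t0                     # Delta D(E) = D(E,t1) - D(E,t0)
    ax = ax or plt.gca()
    ax.fill_between(energy_ev, np.abs(delta), where=delta >= 0.0,
                    color="tab:blue", label="accumulation")
    ax.fill_between(energy_ev, np.abs(delta), where=delta < 0.0,
                    color="tab:red", label="depletion")
    ax.set_xlabel("E - E_F (eV)")
    ax.set_ylabel("|ΔD(E)|")
    return ax
```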
Excitation Pattern of Bulk Fe

While the minimal system Fe₁/(MgO)₃(001) [24,25] harbors only a single ultrathin Fe layer, a bulk-like coordination of Fe is restored in thicker heterostructures, and we expect, in accordance with the previous sections, that the properties of the inner Fe layers approach bulk behavior. Therefore, we briefly discuss the response of bulk Fe to the same laser pulses as applied to Feₙ/(MgO)ₘ(001). Due to the cubic symmetry of α-Fe, no difference is expected between in- and out-of-plane polarization. Figure 5a shows a relatively broad excitation pattern in both spin channels for ℏω = 1.63 eV. In contrast to the nearly filled majority band, the minority-spin 3d band is approximately half-filled and is thus more susceptible to excitations, due to the large number of initial and final states below and above the Fermi level. The energy range of the excitations spans from −5 to +4 eV, which is large compared to the energy of the pulse (1.63 eV). Even considering its relatively broad FWHM of 0.6 eV, one would expect that the excitation should not substantially exceed an interval of ±2 eV around the Fermi level (see the estimate below). We ascribe the extended energy range to the particularly large absorption taking place at low energies, consistent with the steep rise in the imaginary part of the dielectric function in Figure 3. For the electric field strength of the applied pulses, this may induce visible non-linear effects in the absorption.

With increasing photon energies, such non-linear features in the excitation pattern beyond ±ℏω around E_F decrease substantially. In Figure 5b,c, majority-channel features are located close to E_F and around ±2.5 eV for both photon energies, ℏω = 3.27 and 4.5 eV. For ℏω = 4.5 eV, additional features are found at +4 and −4.5 eV. In the minority channel in Figure 5b, features showing depletion at −2.5 and −1.5 eV and accumulation at +0.8 and +1.8 eV occur. For the larger frequency in Figure 5c, we can identify an analogous relation between features centered around −2.8 and −1 eV below E_F and +1.8 and +3.5 eV above. In Figure 5d, ℏω = 7.75 eV, we can once again identify distinct features in ΔD_σ(E) that are separated by ℏω: in the majority channel at −0.5 and +7.3 eV, as well as around ±4 eV; in the minority channel, small features at −6 and +2 eV. More importantly, we can observe extended energy intervals with a vanishing change in occupation, and these are, due to the metallic character of bcc Fe, obviously not related to regions with a vanishing static DOS. In particular, we can identify several regions where essentially no or few excitations take place (cf. orange arrows in Figure 5d): in the majority channel, the interval between E_F and 2 eV, as well as around −2 eV, and a strong reduction around +6 eV. In the minority channel, we also find three extended regions with vanishing excitations: between −5 and −3.5 eV, within ±1 eV, and between +2 and +4 eV. For all photon energies, a region of low excitations in an interval around E_F is observed, which is related to a deep minimum in the static minority-spin DOS of bulk bcc Fe. Its width increases with increasing photon energy and becomes a complete gap for the largest photon energy. Such features are important for the later comparison to the excitations in the heterostructure. For bulk MgO, essentially no impact is expected for laser pulses below the bandgap. In our previous study [25] we also showed that even photon energies in the vicinity of the LDA bandgap of bulk MgO do not lead to a substantial redistribution in the occupation of states between the valence and the conduction band. However, an excitation with ℏω = 7.75 eV leads to one comparatively sharp transition, which removes carriers from the upper valence band edge to states located approximately 3 eV above the conduction band minimum.
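The ±2 eV expectation invoked above is a simple back-of-the-envelope estimate, reproduced here with the numbers quoted in the text (our own check, not the authors' analysis):

```python
# One-photon (linear) excitations should stay within roughly
# hbar*omega plus half the spectral width of the pulse around E_F.
hbar_omega = 1.63   # eV, lowest photon energy used
pulse_fwhm = 0.6    # eV, spectral FWHM of the pulse
window = hbar_omega + pulse_fwhm / 2.0
print(f"expected one-photon window: +/-{window:.1f} eV about E_F")  # ~±1.9 eV
```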
Excitations in Fe/MgO(001) Heterostructures Illuminated with In-Plane-Polarized Laser Light

Having understood the excitation pattern in the bulk materials, we now turn to ΔD_σ(E) in the heterostructures. In contrast to the cubic bulk systems, the polarization direction of the electric field introduces a significant anisotropy in the carrier dynamics. [24,25] We start with the in-plane polarized case, i.e., the orientation of the electric field vector parallel to the interface.

Figure 6 displays the comparison of the layer- and spin-resolved changes ΔD_σ(E) for Fe₅/(MgO)₇(001) after illumination with ℏω = 1.63 eV, ℏω = 3.27 eV, ℏω = 4.5 eV, and ℏω = 7.75 eV. The majority of excitations are encountered in a window of ±ℏω around E_F. Maxima in the static DOS provide an enhanced density of initial or final states for a direct excitation between occupied and unoccupied states separated by ℏω. Therefore, the patterns in the panels correlate essentially with the features (peaks and valleys) in the respective static layer-resolved DOS in Figure 2; a first-order estimate of this correlation is sketched below. Changes in occupation substantially outside the window of ±ℏω around E_F cannot be reached by a direct excitation process. These are particularly prominent for ℏω = 1.63 eV, where we observe a significant occupation of states at +3 eV and above, as well as a depletion at and below −3 eV, in both spin channels in Figure 6a. These should be regarded as non-linear effects, such as multi-photon excitations or the temporary renormalization of the energies of initial and final states as a consequence of the changed interactions within the excited charge cloud, arising from a combination of large field strength and large absorption, in particular for low frequencies (cf. Figure 3). Such non-linear patterns are still present for ℏω = 3.27 eV in Figure 6b but decrease substantially for larger photon energies.

A direct excitation (vertical black arrows in Figure 6a) is observed in the majority channel of the Fe layer from initial states below E_F to the remainder of the unoccupied d states at around +0.8 eV. This is still below the charge-transfer gap; thus, the propagation of these excitations into the MgO conduction band cannot be expected. However, due to the presence of interface states, we encounter a considerable population in MgO(IF), which decreases quickly with distance from the interface, while a corresponding depletion of carriers below E_F is not encountered in the MgO part. A full comparison of the excitation processes in Feₙ/(MgO)ₘ(001) is presented in the Supporting Information. [36] It suggests that the population at +0.8 eV becomes larger with increasing thickness, which indicates a transfer of excited carriers from the deeper Fe layers toward MgO(IF) and MgO(IF−1). A population of states in MgO(IF) directly above E_F also occurs in the minority channel, but here it goes hand in hand with the depopulation of interface states below E_F.

Increasing the frequency to ℏω = 3.27 eV allows excitations in the Fe subsystem to reach across the charge-transfer gap: final states between 2.5 and 3 eV hybridize with the sp orbitals of the MgO conduction band; see the vertical black arrows in Figure 6b. We observe a transfer of carriers into MgO(IF) that does not decay toward MgO(C). Since direct excitations in bulk-like MgO are not possible for this photon energy, this indicates the propagation of carriers excited in the Fe subsystem or MgO(IF) into the conduction band of MgO.
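The correlation between static-DOS maxima separated by ℏω and the first-order excitation pattern can be estimated crudely by overlapping occupied and unoccupied DOS shifted by the photon energy. This is our own illustrative sketch; matrix elements and selection rules are deliberately ignored:

```python
import numpy as np

def direct_excitation_weight(dos, energy_ev, fermi_ev, photon_ev):
    """Overlap of occupied DOS at E with unoccupied DOS at E + hbar*omega,
    a matrix-element-free estimate of where direct transitions are favored."""
    de = energy_ev[1] - energy_ev[0]            # uniform grid spacing (eV)
    occ = np.where(energy_ev <= fermi_ev, dos, 0.0)
    unocc = np.where(energy_ev > fermi_ev, dos, 0.0)
    shift = int(round(photon_ev / de))          # grid points per hbar*omega
    return np.sum(occ[:-shift] * unocc[shift:]) * de
```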
With increasing photon energies, the picture becomes more defined. Excitations with ℏω = 4.5 eV are of particular interest, since they correspond to the setup of recent optical-pump/x-ray-probe experiments on Fe/MgO heterostructures, [23,39] and we expect to capture essential aspects of the experimental pump process in our modeling.

We first concentrate on the minority channel of Fe₅/(MgO)₇(001), shown in the lower panels of Figure 6c. The photon energy is in the vicinity of, but still below, the LDA bandgap of bulk MgO and thus, as we have shown earlier, [25] direct excitations across the bulk MgO LDA bandgap are negligible. Therefore, the significant deoccupation seen at the top of the valence band in the MgO(C) layers in Figure 6c between −4 and −4.5 eV, which extends through all the MgO layers (horizontal purple arrows), can be associated with direct excitations to states slightly above E_F in MgO(IF) and the Fe layers. Furthermore, the comparison with Figure 5c reveals that in bulk Fe, excitations with final states up to 1 eV above E_F are sparse and there is a sharp decrease in generated carriers below −4 eV. In contrast, we find in the minority channel of Fe(C) in Figure 6c a large occupation directly above the Fermi level, while the depletion below −4 eV is small. This indicates that the carriers above E_F do not result from a direct excitation within Fe layers with a bulk-like DOS. It rather suggests a dominant role of the interface layers MgO(IF) and Fe(IF) in the effective transfer of carriers from states below −4 eV in MgO(C) to states just above E_F in Fe(C), as depicted by the diagonal purple arrows in Figure 6c. In the majority channel (upper panels), a similar process can be identified. Here, however, the excitation via hybridized states (diagonal purple arrow) competes with direct excitations from the Fe d band in all Fe layers (vertical black arrows), because we find for bulk Fe in Figure 5c, as well as for Fe(C) in Figure 6c, that the amount of carrier depletion around −4 eV is of similar magnitude as the occupation at +1 eV.

As indicated by the green arrows, we also observe the reverse process: carriers slightly below the Fermi level in the Fe subsystem are excited across the charge-transfer gap to MgO conduction band states between +4 and +4.5 eV, into the central layers of MgO. This allows the transfer of carriers from states in Fe(C) just below E_F to conduction band states in MgO(C). As the photon energy is below the bandgap of MgO, the additional carriers in MgO cannot result from a direct excitation within the insulator. However, following the same argument given above, we cannot safely distinguish in this case whether the excitation generating these carriers takes place in the bulk-like layers of Fe or at the interface. The comparison to ℏω = 3.27 eV reveals that the MgO valence and conduction band states at −4 and +4 eV, respectively, are of particular importance for the transfer of carriers in both directions: for the lower photon energy in Figure 6b, the transfer between the subsystems is reduced, since the relevant levels at ±4 eV cannot be reached by a direct excitation in the linear regime. As the changes in occupation for these transitions are consistent with resonant excitations and rather large, we may safely exclude a significant non-linear contribution here. Non-linear contributions are of higher order, i.e.,
of lower magnitude compared to direct resonant excitations (interface and bulk), in particular for frequencies on the order of and above the MgO bandgap, as corroborated by our previous analysis of the dependence of the excitation pattern on the pulse intensity in the Supporting Information of ref. [25] (Figures S6 and S7).

For ℏω = 4.5 eV, the transfer of carriers thus works efficiently in both directions simultaneously. This avoids a significant net accumulation of charge in one of the subsystems, which would come with a penalty from the Coulomb interaction. A similar concerted process has been proposed previously for the minimal heterostructure Fe₁/(MgO)₃(001). [25] Our present work thus shows that this simultaneous bi-directional relocation of carriers is also active in heterostructures with thicker slabs of Fe and MgO, which are easier to realize in experiment. The significant spin splitting of the Fe d states affects the hybridization with the MgO conduction band states, which are relevant for the propagation of the carriers. The hybridization is enhanced here for the majority spin channel, leading to a substantial depletion of states in the valence band of MgO(C) compared to the minority channel, and thus to spin-dependent changes in |ΔD_σ(E)| even for the (nonmagnetic) insulating component.

For laser pulses above the bulk MgO LDA bandgap and on the order of or beyond the bandwidth of the Fe d band, direct excitations in the Fe subsystem are diminished compared to the lower photon energies, since now either initial or final states lie outside the range of the Fe d band. On the other hand, the bi-directional transfer of carriers is still effective for ℏω = 7.75 eV, as shown in Figure 6d. Here, transitions take place from −5.5 and −7 eV in MgO to +2.0 eV in the minority spin channel of Fe, as indicated by the diagonal purple arrows. It is important to note that, for this process, there is no corresponding excitation in bulk Fe (cf. orange arrows in Figure 5d), and therefore this excitation must involve interface states. In the reverse direction (green arrows in Figure 6), carriers from below E_F down to −2 eV are excited to unoccupied states around +6 eV and above. This may take place entirely in the metallic subsystem, but the occupation of the final states extends deep into the conduction band of the MgO subsystem. Similarly, in the majority spin channel, carriers are transferred from MgO states at −7 eV to Fe states up to +1 eV (purple arrows). Simultaneously, Fe states from −2 or −1 eV below E_F are depleted in favor of MgO states between +5.5 and +7 eV, as illustrated by the green arrows. Overall, the difference in the excitation pattern between the two spin channels is significantly reduced at this energy. Additionally, we observe direct excitations in the MgO layers from states below the Fermi level at −3.5 to −4.0 eV to conduction band states around +4 eV, indicated by the vertical black arrows in Figure 6d.
It is important to note that the mechanisms sketched above are essentially independent of the thickness of the layers; the qualitative picture, which was already obtained for the smallest system size, turns out to be effective also in systems with much larger layer thicknesses. This is corroborated by a detailed comparison of the excitation patterns of the Fe₁/(MgO)₃(001), Fe₃/(MgO)₅(001), and Fe₅/(MgO)₇(001) heterostructures for all frequencies, presented in the Supporting Information. [36]

Excitations in Fe/MgO(001) for Out-of-Plane Polarization of the Electric Field

Some important distinctions occur in ΔD_σ(E) for polarization of the electric field perpendicular to the interface planes, and they are consistent with the conclusions drawn from the dielectric tensor presented in Section 3.2 and the transient changes in the charge distribution discussed in Section 3.3: a particularly strong reduction of the scale of ΔD_σ(E) with respect to in-plane polarized light occurs for the lowest frequencies, whereas the picture reverses for photon energies above the bandgap of bulk MgO. The latter is demonstrated in Figure 7b, where the amplitude of the features in ΔD_σ(E) increases by more than a factor of two for ℏω = 7.75 eV. On the other hand, the magnitudes of the light-induced changes in occupation are rather similar for ℏω = 4.5 eV for both orientations of the electric field; see Figure 7a. In contrast to the in-plane polarization shown in Figure 6c, for ℏω = 4.5 eV we observe in Figure 7a a significant occupation of minority states after the pulse between +2 and +5.5 eV in the MgO conduction band, similar to the majority channel. This is accompanied by an enhanced depletion of states at the upper valence band edge of the central MgO layer in both spin channels. Besides the direct excitations for ℏω = 7.75 eV in Figure 7b, we find in the minority spin channel a depletion around −7 eV in the valence band of MgO, concomitant with an accumulation of states right above E_F in the Fe subsystem (purple arrows). This process is significantly enhanced compared to Figure 6d. Furthermore, we see in Figure 7b an enhanced excitation of MgO conduction band states around +5 eV, in combination with an enhanced depletion in the Fe subsystem at −2.5 eV (green arrows), which is clearly above the valence band edge of MgO.

Conclusion

We systematically explored carrier dynamics and excitation patterns in Feₙ/(MgO)ₘ(001) metal-insulator heterostructures (n = 3, 5 and m = 5, 7) excited by laser pulses in the optical to ultraviolet range, corresponding to photon energies below, around, and above the bandgap of bulk MgO. The polarization of the electric field was selected in- and out-of-plane with respect to the stacking of the heterostructures. The response to optical excitations was characterized in terms of the dielectric tensor calculated in the random phase approximation. We found that the cross-plane components of the dielectric tensor below the DFT bandgap of bulk MgO increase with increasing thickness of the metallic Fe part, which diminishes the anisotropic response to pulses with in-plane and cross-plane polarization of the electric field observed previously in Fe₁/(MgO)₃(001). [25]
Subsequent real-time TDDFT simulations provided insight into the transfer of carriers within and between the Fe and MgO subsystems. The analysis was carried out in terms of the electron density redistribution and the layer-resolved changes in the occupation numbers before and after the laser pulse. The redistribution of electronic charge shows a significant anisotropy and a qualitatively similar picture for both Fe₃/(MgO)₅(001) and Fe₅/(MgO)₇(001): the Fe layer is efficiently addressed at low frequencies by in-plane polarized light, whereas for frequencies higher than the bulk MgO bandgap we found a particularly large response of the MgO layers to cross-plane polarized light.

The time-resolved changes in the energy-resolved occupation numbers of the Kohn-Sham orbitals are consistent with the predictions from the dielectric function, but yield additional insight into the spatial resolution with respect to the layer and the energy of the involved orbitals. For frequencies on the order of and above the LDA bulk MgO bandgap, we observed a simultaneous charge transfer from the valence band of MgO to Fe states above E_F and from Fe d states below the Fermi level to the conduction band of MgO. This concerted process is relevant for both polarization directions and occurs in all investigated heterostructures. It results in the accumulation of hot carriers in the conduction band of the MgO subsystem, which are not encountered in the bulk of the insulator after photo-excitation. In contrast to the excitation in the bulk systems, which is confined to particular energy ranges, a much richer pattern of redistributed occupation is observed in the heterostructures, largely independent of the layer thickness. Since changes in the occupation are present in the central (bulk-like) layers of the heterostructures but not observed in the (separate) bulk systems, we can conclude that, in the heterostructure, the hybridized interface states play the dominant role in the relocation of charge carriers between the subsystems. These findings confirm that the concerted mechanism of heat transfer initially sketched in ref. [25] for a minimal model system remains robust in realistic heterostructures involving several layers of both Fe and MgO away from the interface. Our findings furthermore suggest that careful tuning of the photon energy and polarization direction may allow selecting the transfer between favorably oriented orbitals via particular interface states, even in extended heterostructures with layer thicknesses in the range of one or several nanometers.

Hot carriers generated by optical absorption processes play an important role in photocatalysis and the harvesting of solar energy (e.g., refs. [40,41]). Such applications often require the separation of positive and negative charge carriers. The concerted mechanism presented here results in the simultaneous transfer of both carrier types, so the charge remains balanced. On the other hand, the energy of these carriers is substantially different in the two subsystems. This implies a transfer of energy, or rather heat, as the energy may dissipate very quickly, within a few 10-100 fs. The excited carriers may, in principle, be detected in state-of-the-art optical-pump/x-ray-probe experiments, as presented in Rothenbach et al.
[23], which requires further improvements with respect to time and energy resolution. Nevertheless, the comprehensive understanding of the conditions under which optically excited carriers propagate into, and possibly through, the interface might open further opportunities to achieve control of the transfer of excitations in other classes of metal-insulator heterostructures.

Figure 4 illustrates the change Δρ(r, t) = ρ(r, t) − ρ(r, 0) in the spatially and temporally resolved charge density ρ(r, t) at t = 20.2 fs, i.e., after the application of the laser pulse, with respect to t = 0, in the Fe₅/(MgO)₇(001) heterostructure for different frequencies and polarization directions (a small post-processing sketch is given after the figure captions). Animations of the full temporal evolution of Δρ(r, t) indicate horizontal/vertical fluctuations of the electronic clouds around the atomic positions for the in-plane/out-of-plane polarization of the electric field during the pulse. At t = 20.2 fs (and beyond), the electric field has decayed completely.

Figure 3. Imaginary part of the dielectric tensor Im[ε_ij(ω)] of Fe₁/(MgO)₃(001), Fe₃/(MgO)₅(001), and Fe₅/(MgO)₇(001), as well as the bulk materials, as a function of energy: a) in-plane (Im[ε_xx(ω)] = Im[ε_yy(ω)]) and b) out-of-plane (Im[ε_zz(ω)]) components, respectively, calculated within the RPA. The spectra are shifted vertically by a constant value of 5 for clarity. The vertical lines denote the laser frequencies used in our RT-TDDFT calculations. For comparison, we also show the respective spectra calculated for cubic bulk Fe and MgO. [24]

Figure 5. Spin-resolved changes in the time-dependent DOS of bulk Fe at t = 20.2 fs (after the decay of the laser pulse) with respect to t = 0, i.e., ΔD_σ(E) = D_σ(E, 20.2 fs) − D_σ(E, 0), for laser pulses with frequencies of ℏω = 1.63 eV, ℏω = 3.27 eV, ℏω = 4.5 eV, and ℏω = 7.75 eV and a peak power density of S_peak ≈ 5 × 10¹² W cm⁻². For better visibility, we plot the absolute value; the sign is indicated by the color. Blue: positive sign, accumulation of occupation; red: negative sign, depletion of occupation. The upper panels refer to the majority spin channels and the lower panels to the minority spin channels. The orange arrows denote features discussed in the text.

Figure 6. Spin- and layer-resolved changes in the TDDOS at t = 20.2 fs (after the decay of the laser pulse) with respect to t = 0, ΔD_σ(E) = D_σ(E, 20.2 fs) − D_σ(E, 0), for in-plane polarized laser pulses with frequencies of a) ℏω = 1.63 eV, b) ℏω = 3.27 eV, c) ℏω = 4.5 eV, and d) ℏω = 7.75 eV applied to the Fe₅/(MgO)₇(001) heterostructure. For better visibility we plot the absolute value |ΔD_σ(E)|; the sign is indicated by the color. Blue: positive sign, accumulation of occupation; red: negative sign, depletion of occupation. The upper panels refer to the majority spin channels and the lower panels to the minority spin channels. Black, purple, and green arrows refer to particular transitions, which are discussed in the text.
Figure 7. The difference ΔD_σ(E) of the layer-resolved TDDOS between t = 0 and t = 20.2 fs (after the decay of the laser pulse), for out-of-plane polarized laser pulses with frequencies of a) ℏω = 4.5 eV and b) ℏω = 7.75 eV applied to the Fe₅/(MgO)₇(001) heterostructure. Same colors for positive and negative signs as in Figure 6. The upper panels refer to the majority spin channels and the lower panels to the minority spin channels. Black, purple, and green arrows refer to particular transitions, which are discussed in the text.
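As a companion to the Δρ(r, t) maps of Figure 4, here is a minimal post-processing sketch (our own, assuming densities on a regular real-space grid of shape (nx, ny, nz) with z along the stacking direction):

```python
import numpy as np

def layer_charge_change(rho_t, rho_0, n_layers):
    """Integrate the density change over the in-plane axes and bin along z
    to obtain a per-layer charge redistribution profile."""
    delta = rho_t - rho_0                       # Delta rho(r) on the grid
    profile = delta.sum(axis=(0, 1))            # integrate over x and y
    bins = np.array_split(profile, n_layers)    # crude equal-thickness layers
    return np.array([b.sum() for b in bins])
```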
2,6-Bis[1-(2-isopropylphenylimino)ethyl]pyridine

The title compound, C27H31N3, has E substitution at each imine double bond, and the two N atoms adopt a trans-trans relationship. The benzene rings are twisted out of the mean plane of the pyridine ring; the mean planes of the aromatic groups are rotated by 63.0 (1) and 72.58 (8)°. The crystal structure is sustained mainly by C—H⋯π and hydrophobic methyl-methyl interactions.

Many reports have appeared in the literature concerning the effects (steric and/or electronic) of ligand modifications, with the aim of finding structure-activity relationships. The crystal structures of different 2,6-bis(arylimino)pyridine ligands and their transition metal complexes offer the possibility to compare structural parameters directly. Here we report the synthesis and crystal structure of the title compound, (I) (Fig. 1). The molecule adopts a nonplanar conformation in which an E configuration around each C=N imine group is observed; likewise, the two N atoms display a trans-trans relationship. The conformation of the N-N-N system is, of course, different in each case. In general, X-ray structures of bis(arylimino)pyridines reveal that in the solid state the imino nitrogen atoms prefer to be disposed trans with respect to the central pyridine nitrogen (Mentes et al., 2001; Huang et al., 2006), in order to minimize the interaction between the nitrogen lone pairs. The phenyl rings in (I) are twisted out of the mean plane of the pyridine ring, the mean planes of C8-C13 and C19-C24 being rotated by 63.0 (1)° and 72.58 (8)°, respectively. This molecular conformation is determined by the formation of pairs of intramolecular C-H···N hydrogen bonds, involving methyl groups with the N of the pyridine ring and isopropyl groups with the imine groups, with a range of distances C···N = 2.799 (3)-2.892 (4) Å (Fig. 2). These interactions lead to the formation of five-membered rings described by the graph-set symbol S(5) (Bernstein et al., 1995). The crystal structure of (I) consists of dimers linked by self-complementary C-H···π interactions related by an inversion centre, with C15···Cg1 = 3.757 Å, where Cg1 is the centroid of the N1,C1-C5 ring (Fig. 2). Neighbouring dimers are connected through additional C-H···π interactions between phenyl rings (Fig. 3), generating supramolecular sheets parallel to the c axis. Details of the geometrical parameters of these hydrogen-bonding interactions are summarized in Table 2. Finally, the stacking of adjacent sheets is sustained by hydrophobic methyl-methyl interactions along the a axis (Fig. 4).

S2. Experimental

The title compound was synthesized by condensation of 2,6-diacetylpyridine (1.63 g, 10 mmol) with 2-isopropylaniline (2.74 g, 20.3 mmol) in 25 ml dry methanol with five drops of formic acid. The solution was refluxed for 18 h. Upon slow cooling to room temperature and standing overnight at 273 K, yellow prisms of (I) were obtained and filtered off (yield 75%).

S3. Refinement

All H atoms bound to carbon were included in calculated positions (C-H = 0.93-0.96 Å) and refined as riding, with U_iso(H) = 1.2U_eq(C) or 1.5U_eq(methyl C).

Figure 1. Molecular structure of (I) with displacement ellipsoids drawn at the 30% probability level (H atoms omitted for clarity).

In the weighting scheme, P = (F_o² + 2F_c²)/3; (Δ/σ)_max < 0.001; Δρ_max = 0.24 e Å⁻³; Δρ_min = −0.18 e Å⁻³.

Special details. Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix.
The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > 2σ(F²) is used only for calculating R-factors(gt) etc., and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
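The interplanar rotations quoted above (63.0° and 72.58°) are dihedral angles between least-squares mean planes. A minimal sketch of how such angles are typically computed (our own illustration, not the refinement software's routine):

```python
import numpy as np

def mean_plane_normal(coords):
    """Unit normal of the least-squares plane through a set of atoms
    (rows of coords are xyz positions, in Å), via SVD."""
    centered = coords - coords.mean(axis=0)
    return np.linalg.svd(centered)[2][-1]       # smallest-variance direction

def interplanar_angle_deg(ring_a, ring_b):
    """Acute angle between the mean planes of two atom sets, in degrees."""
    n1, n2 = mean_plane_normal(ring_a), mean_plane_normal(ring_b)
    cosang = abs(float(np.dot(n1, n2)))
    return float(np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0))))
```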
Deciphering a critical role of uterine epithelial SHP2 in parturition initiation at single cell resolution

The timely onset of female parturition is a critical determinant of pregnancy success. The highly heterogeneous maternal decidua has been increasingly recognized as a vital factor in setting the timing of labor. Despite the cell-type-specific roles in parturition, the role of the uterine epithelium in the decidua remains poorly understood. This study uncovers the critical role of epithelial SHP2 in parturition initiation via COX1- and COX2-derived PGF2α, leveraging epithelial-specific Shp2 knockout mice, in which Shp2 disruption leads to delayed parturition initiation, dystocia and fetal deaths. Additionally, we show at single cell resolution that there are distinct types of epithelium in the decidua approaching parturition, accompanied by profound epithelial reformation via proliferation. Meanwhile, the epithelium maintains its microenvironment through close interactions with stromal cells and macrophages. In brief, this study provides a previously unappreciated role of the epithelium in parturition preparation and sheds light on the prevention of preterm birth.

The sole rationale for conducting an in-depth analysis of Shp2 seems to hinge on the enrichment of HB-EGF/EGFR signaling in the luminal compared to the glandular epithelium. There is little justification for the resultant choice of Shp2. Moreover, the conditional knockout strategy applied does not distinguish between luminal and glandular epithelium, so this does not make sense.

As stated by the authors, the re-formation of the uterine epithelial lining at, or preceding, parturition remains poorly understood. As this is the major focus of this study, a detailed histological analysis of implantation sites, including their surrounding myometrial tissue, should be conducted and included in the main part to depict when and how the epithelium reforms. It is stated that this process is initiated at the inter-implantation sites, which then should harbour leading edges of highly proliferative epithelial cells. This must be demonstrated. It should also be shown whether any glandular epithelium remains at E19 in the thin muscular tissue layer overlying the decidua. What is the squamous epithelium, and where does it come from (e.g., Fig. 1e)?

The text clearly states that the tissue for single cell analysis entailed the uterus after removal of decidua, placenta and fetus. It is confusing that the diagrams also include decidua. The myometrial tissue (i.e., the tissue overlying the decidual face of implantation sites) at E19 is a very thin layer. There are barely any glands in this thin strip of muscle. Even at E16, as shown in the supplement, there is basically no gland in the tissues analysed (the picture only shows one small gland at the periphery, which appears to have been chosen for the main figure). Hence it is unclear where all of the epithelial cells, specifically the glandular epithelial cells, that were identified in the single cell clusters should come from. Again, a histological depiction of the entire structure at low magnification, with zoomed-in areas of interest, is instrumental to gain a proper understanding of tissue morphology and morphogenetic processes around parturition. As an example, in Fig.
3a, does this really depict the uterus overlying the decidua, or rather an inter-implantation site? It is surprising that there should be such extensive stretches of epithelial cells present in the tissue covering the implantation site.

In line with this question around the proportional representation of cells in the scRNA-seq analyses, the proportion of epithelial cells in Fig. 4d and Fig. 1b is highly discrepant; Fig. 4 is more in keeping with expectation as to cell proportions. What is the difference? Also, what is the cytokeratin-positive compact zone, which, hence, would be of fetal origin?

The Ltf-Cre model induces gene deletion in the uterine epithelium at E4. Many of the data shown argue in favor of a role for Shp2 in early epithelial-stromal interactions that lead to later differences in the decidua, as shown. They do not, however, relate to the role of SHP2 in parturition. These developmentally dynamic roles of SHP2 remain unresolved and inconclusive.

It is unclear how the few glands overlying the decidua at E19 should cause such major differences in PGF2α production that result in far-ranging consequences for systemic maternal hormone levels. Also, there is no obvious spatial relationship between the PGF2α production by glands and the changes in COX1 and COX2 expression. How can this be explained?

Leading on from this question, how tight is the Ltf-Cre-induced conditional ablation of Shp2? Does it affect the squamous epithelial layer shown in Fig. 1e? Are other estrogen-responsive epithelia outside the uterus affected?

The lack of epithelial organoid growth from uteri at E4 does not relate to a role for Shp2 in parturition.

There are multiple occasions of mix-ups in the figures vs legends between E16 and E19 that confuse the results.

There is no decidualization at labor. This process is completed far earlier. This statement in the text (line 367) encapsulates the major pitfalls of this manuscript, which mixes up processes that occur at very different developmental time points.

The link to TLR4-mediated inflammatory responses is poorly developed and perhaps unnecessary for a manuscript focused on Shp2 function in the induction of labor.

How does uterine epithelial-specific ablation of Shp2 affect the ovary? This link is poorly evolved.

The manuscript confuses insights from the mouse model with those in humans. Clear distinctions should be made in all statements that relate to either one or the other. For example, the term 'uterine milk' is usually only applied in humans. Similarly, 'prostaglandin activity in amniotic epithelium as an important initiator of labor at term' refers to the situation in humans.

The manuscript needs major contextual editing for language.
Additional points: The manuscript requires major improvements on the following points: 1. The inconsistent use of "mouse uteri" throughout the text is distracting. Please substitute "mice uteri" and "mice uterus" with "mouse uteri".

The manuscript by Liu and coworkers describes the endometrial cell types and the impact of SHP2 in the endometrial epithelium in the non-labor (day 16) and labor (day 19) mouse uterus. This manuscript gives important data on the cell types and potential communication pathways in the mouse uterus during this period. It also shows transcriptomic changes in these cell types at these stages. The manuscript then goes on to investigate the role of SHP2 in the endometrial epithelium by conditional ablation of this gene using the Ltf-Cre model. This analysis shows that loss of SHP2 in the epithelium results in a delay in parturition. The delay is in part due to a decrease in prostaglandin synthesis, and it can be rescued by prostaglandin treatment. This is an important manuscript because it not only defines the endometrial cell types during pregnancy and labor but also shows a critical role for the endometrial epithelium during this process. There are two minor weaknesses in this manuscript. 1. The LPS experiment is not well developed. It states that LPS was given on day 4. Why was this day chosen and not a later day in pregnancy? What is the significance of the RNA-seq in this approach, since the actions of SHP2 are later in pregnancy? This data should be removed. 2. What is the mechanism of SHP2 regulation of parturition? In the discussion it is suggested that it may involve the regulation of P4 signaling. This should be evaluated at least bioinformatically by comparing the gene expression changes to known PGR-signaling genes identified by transcriptomics or cistromics. All this is an important and novel finding and will add significant new avenues for research.

General: The manuscript authored by M. Liu and associates describes experiments to establish the factors relevant to the differences in gene expression between the late-gestation and the periparturient uteri in the mouse. The methods employed include scRNA-seq and consequent bioinformatic analysis, CellChat to determine the cell interactions in the uterus around parturition, and conditional depletion of a gene, Shp2, in the uterine epithelium. The results demonstrate that deletion of this gene does not affect embryo implantation or establishment of the decidua, but that it has an effect on the late-pregnancy changes in this tissue that normally accompany the birth of the litter. This is a novel and noteworthy finding, as it opens a new horizon in the exploration of parturition. A strength of the manuscript is the cogent discussion of this new and important information. In terms of the methods employed, it would be useful to have more information on the quality-control aspects of the scRNA analysis.

This manuscript has much to recommend it. It is a novel exploration resulting in the discovery of a previously unknown mechanism, i.e., the role of the uterine epithelium in the process of parturition. The single cell global gene analysis appears to have been appropriately conducted, and the presence of scatterplots in the figures is indicative of the high quality of the data. The manuscript will require extensive language editing, as there are many syntax, grammar and spelling errors that render it difficult to read and detract from the overall presentation.

Specific:
1. Line 32: It would seem that decidualization, a process that begins at implantation on day 5 of gestation, was normal, not aberrant. The process that was disturbed in this study was parturition; thus the abstract is misleading. This requires clarification. 2. Line 119: It is stated that the signature of epithelial cells in labor is portrayed. Although parturition in the mouse usually occurs on day 19 post mating, there is some variability. How was it determined that the animals were in labor at the time of collection of the uterine tissues? 3. Figure 1F is not labelled. 4. Figure 3C: There is no information about whether these differences are statistically significant. 5. Lines 188 et seq: Was it determined whether Shp2 was depleted in the ovary? 6. Figure 3F: The immunohistochemical image purported to indicate the overexpression of StAR is not particularly convincing. 7. Figure 3F: There is a significant reduction in the expression of Akr1c18 in the corpora lutea of the d/d model, attributed to reduced expression of Ptgs2. Is it possible that Akr1c18 is a direct target of Shp2? 8. Figure 7 (summary figure) is not an easily understandable resumé of the investigation. It should be improved, or a better and more comprehensive figure legend provided.

NCOMMS-23-09237

My co-authors join me in expressing our sincere appreciation of the reviewers' thoughtful comments. We have critically reviewed each comment and added new experiments and results as suggested by the reviewers. Our responses are elaborated below. The reviewers' comments are followed by our responses, highlighted in blue. All changes in the revised manuscript are marked in blue.

Reviewer #1

Parts of the data are interesting and novel, specifically the role of uterine epithelial Shp2 in parturition timing. However, as a whole the study appears very disjointed. The rationale for the focus on Shp2 as a result of identifying "EGFR/MAPK signaling" enriched in the epithelial compartment of scRNA-seq data is ill-justified. The subsequent analyses jump between an early role for Shp2 in uterine epithelium and epithelial-stromal interactions and a late-gestation role in the initiation of parturition. The analyses of cell-cell interactions and inflammatory signal mediators lack depth. Overall, this study contains some interesting results that, after restructuring, would be better suited to a more specialized journal.

The sole rationale for conducting an in-depth analysis of Shp2 seems to hinge on the enrichment of HB-EGF/EGFR signaling in the luminal compared to the glandular epithelium. There is little justification for the resultant choice of Shp2. Moreover, the conditional knockout strategy applied does not distinguish between luminal and glandular epithelium, so this does not make sense.

Thanks very much for this suggestive comment. There are four reasons for conducting the in-depth analysis of Shp2: (1) the MAPK signaling pathway is enriched in the epithelium; (2) this enrichment is increased in the day 19 uterus compared with day 16; (3) there was low organoid formation efficiency derived from Shp2-deficient epithelial cells; (4) SHP2 is a critical non-receptor protein tyrosine phosphatase that activates the Ras/mitogen-activated protein kinase (MAPK) pathway (PMID: 17993263). Hence, we constructed a genetic mouse model harboring uterine epithelium-specific deletion of Shp2 by crossing Shp2-loxP (Shp2 f/f) mice with Ltf-Cre mice to further dissect the physiological role of SHP2 in both luminal and glandular epithelium in parturition (Lines 173-179).
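The enrichment reasoning in points (1)-(2) is the kind of check typically done with a per-cell gene-set score. A minimal illustrative sketch in Python with scanpy; the file name, the obs columns, and the gene list are our placeholders, not the authors' actual pipeline:

```python
import scanpy as sc

# Score a small MAPK-related gene set per cell and compare across clusters
# and time points (all names below are hypothetical placeholders).
adata = sc.read_h5ad("uterus_d16_d19.h5ad")
mapk_genes = ["Egfr", "Hbegf", "Map2k1", "Mapk1", "Mapk3", "Ptpn11"]  # Ptpn11 encodes SHP2
sc.tl.score_genes(adata, gene_list=mapk_genes, score_name="mapk_score")
print(adata.obs.groupby(["cell_type", "day"])["mapk_score"].mean())
```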
Currently, no luminal- or glandular-epithelium-specific Cre is available, owing to the limited understanding of the differences between these two types of epithelia. Although FOXA2 is highly expressed in glands, Foxa2-driven Cre recombinase is expressed in the node, notochord, floorplate and endoderm, as well as in endoderm-derived organs including the lung, liver, pancreas and gastrointestinal tract throughout development (PMID: 18798232), which largely limits the application of this Cre in the female reproductive tract. In addition, since our scRNA-Seq results unravel several gland-specific genes, it might be possible to dissect the roles of the different epithelia in parturition initiation by creating new tool mice driven by these genes, which deserves further investigation. Additionally, our results indicate that the receptors of the growth factor are expressed only in the luminal epithelium but not in glands, so the application of Ltf-Cre is feasible to illustrate the role of Shp2 in epithelial cells. We revised the manuscript to avoid the disjointedness between HB-EGF/EGFR signaling and SHP2 in the epithelium. The limitation of Ltf-Cre in distinguishing luminal and glandular epithelium is also discussed (Lines 366-370).

To illustrate the role of Shp2 in organoid formation in late gestation, we isolated epithelial cells on day 19 and determined the efficiency of organoid formation. The results suggest that organoid formation is markedly disrupted on day 19 in the absence of epithelial SHP2 (Lines 179-182).

As stated by the authors, the re-formation of the uterine epithelial lining at, or preceding, parturition remains poorly understood. As this is the major focus of this study, a detailed histological analysis of implantation sites, including their surrounding myometrial tissue, should be conducted and included in the main part to depict when and how the epithelium reforms. It is stated that this process is initiated at the inter-implantation sites, which then should harbour leading edges of highly proliferative epithelial cells. This must be demonstrated. It should also be shown whether any glandular epithelium remains at E19 in the thin muscular tissue layer overlying the decidua. What is the squamous epithelium, and where does it come from (e.g., Fig. 1e)?

Thanks for this concern very much. The observation of epithelial reformation during pregnancy, based on histological analysis, has been reported before (Fig. 2, PMID: 6624690, ref 15, Line 73). Epithelial reformation starts around day 10 at the AM site, and then occurs at both the M and AM sites (indicated by red arrows). Our immunostaining of CK8 on days 4, 8 and 10 also confirms the distribution of the epithelium (Fig. 3, 4).

To detect the presence of proliferating epithelium during pregnancy, KI67 was co-stained with CK8 in day 12 and day 14 uteri. The results show that KI67-positive cells are mainly located at the ends of the leading edges, as marked by the white dashed boxes (Fig. 4). Based on this result, it is inappropriate to state that "the epithelium undergoes regeneration from inter-implantation sites". This sentence has been changed to "From day 10 of pregnancy, the epithelium undergoes regeneration to wrap the fetus" (Lines 330-331).

To detect the presence of glandular epithelium on day 19, we performed HE and FOXA2 staining of the day 19 uterus. The results show that there is FOXA2-positive glandular epithelium at the inter-implantation site (Fig.
5). Based on previous studies (PMID: 31074826; PMID: 29426931), glands are mainly distributed at the M site of the uterus, which is largely different from the human. Our previous work based on tissue clearing and 3D imaging uncovered the distribution and dynamic changes of glands during early pregnancy (Fig. 5-8; PMID: 29426931). Those results also show that glands mainly exist at the AM site. The epithelium remaining at E19 in the thin muscular tissue layer overlying the decidua is primarily luminal epithelium.

The origin of the squamous epithelium is a very good question. There are few functional studies on the role of the squamous epithelium during pregnancy. A previous study supposes that the epithelium covering decidualized stromal cells (close to the fetus) is squamous epithelium, while the epithelium covering non-decidualized stromal cells is columnar epithelium (PMID: 6624690). The underlying molecular mechanism regulating the development of these two different cell types deserves further study, which is beyond the scope of the current work.

The text clearly states that the tissue for single cell analysis entailed the uterus after removal of decidua, placenta and fetus. It is confusing that the diagrams also include decidua. The myometrial tissue (i.e., the tissue overlying the decidual face of implantation sites) at E19 is a very thin layer. There are barely any glands in this thin strip of muscle. Even at E16, as shown in the supplement, there is basically no gland in the tissues analysed (the picture only shows one small gland at the periphery, which appears to have been chosen for the main figure). Hence it is unclear where all of the epithelial cells, specifically the glandular epithelial cells, that were identified in the single cell clusters should come from. Again, a histological depiction of the entire structure at low magnification, with zoomed-in areas of interest, is instrumental to gain a proper understanding of tissue morphology and morphogenetic processes around parturition.

Stromal cells (including decidualized and undecidualized stromal cells) are the most abundant cell type in the maternal decidua. To increase the enrichment of the epithelium, stromal cells were removed manually as much as possible, but it is difficult to remove them thoroughly due to the tight connection between stromal and other cell types. As illustrated by our previous work applying whole-uterus staining, tissue clearing and two-photon microscopy imaging, the glands are evenly distributed in the uterus before embryo implantation; after embryo implantation, the glands at the implantation site are extended and pushed out by the rapid growth of the decidua and embryo on days 6 and 8 (PMID: 29426931). With regard to the glands at the inter-implantation site, they are crowded together owing to the growing embryos on both sides (Fig. 6-8). Our histology results also show an array of FOXA2-positive glands at the inter-implantation site on day 19 (Fig. 5). At the implantation site, the areas of interest are the leading edge covering the placenta and the other end connecting with the inter-implantation site (Fig. 9).

As an example, in Fig. 3a, does this really depict the uterus overlying the decidua, or rather an inter-implantation site? It is surprising that there should be such extensive stretches of epithelial cells present in the tissue covering the implantation site.

As illustrated above (Fig.
5 and 9), the leading edge of the epithelium at the implantation site covers the decidua and placenta, while the epithelium at the inter-implantation site is very different. The upper white dashed box of Fig. 9 represents the area shown in Fig. 3A of the manuscript.

In line with this question around the proportional representation of cells in the scRNA-seq analyses, the proportion of epithelial cells in Fig. 4d and Fig. 1b is highly discrepant. Thanks for this concern. The epithelial-stromal interactions were mainly investigated on days 16 and 19 based on scRNA-Seq. The data from the early stage of pregnancy (day 4) provide evidence that embryo implantation and decidualization were normal in both genotypes, excluding the possibility that delayed parturition arises from a defect in embryo implantation or decidualization. The major topic of this study is to unravel the role of the epithelium in parturition. We first show that there is extensive interaction of the epithelium with other cell types approaching parturition. Then, our study further uncovers the underlying mechanism of the epithelium in parturition via SHP2. In a word, this study provides evidence that the developmentally dynamic role of SHP2 beyond the delayed parturition observed in the absence of epithelial SHP2 is limited, and the developmental role of epithelial SHP2 in aspects other than parturition deserves further investigation.

It is unclear how the few glands overlying the decidua at E19 should cause such major differences in PGF2α production that result in far-ranging consequences for systemic maternal hormone levels. Also, there is no obvious spatial relationship between the PGF2α production by glands and the changes in COX1 and COX2 expression. How can this be explained? Thanks for this constructive concern. It has been well established that decidual COX2-derived PGF2α is critical for luteolysis via the circulation between uterus and ovary (PMID: 9751758), thereby contributing to the change in systemic hormone levels. Our data show that the critical enzymes for PGF2α synthesis, such as Ptgs1 and Akr1b3, are expressed in luminal epithelial cells at the implantation site and in luminal and glandular epithelium at inter-implantation sites, as shown by co-staining with CK8 (Fig. 11A-C). These results suggest that, apart from stromal cells in the decidua, the epithelium, including both luminal and glandular epithelium, also contributes to PGF2α production.

Leading on from this question, how tight is the Ltf-Cre-induced conditional ablation of Shp2? Does it affect the squamous epithelial layer shown in Fig. 1e? Are other estrogen-responsive epithelia outside the uterus affected? Thanks for this constructive suggestion. The epithelium-specific deletion efficiency of SHP2 is provided in Fig. 3A of the manuscript. To estimate whether the squamous epithelial layer was affected in the absence of SHP2, we conducted co-immunostaining of CK8 and E-cadherin, as well as HE staining, in Shp2 f/f and Shp2 d/d uteri (Fig. 12A-B). The results indicate that the squamous epithelial layer was not affected in Shp2-deficient epithelium on day 19 (Lines 249-250).

The original study establishing the Ltf-iCre mice shows that there is little to no Cre recombinase activity in other estrogen-responsive epithelia, such as the vagina and oviduct, but strong Cre recombinase activity in the uterine epithelium, seminal vesicle, cauda epididymis, ductus deferens and caput epididymis (PMID: 24823394). We also evaluated other estrogen-responsive epithelia outside the uterus, such as the oviduct and cervix, on day 19 in Shp2 f/f and Shp2 d/d mice. The histological structure is comparable in both genotypes (Fig. 13A-B).

The lack of epithelial organoid growth from uteri at E4 does not relate to a role for Shp2 in parturition.
Thanks very much for this concern. It is inappropriate to speculate on the role of epithelial SHP2 in parturition from the early pregnant stage. Organoid growth from day 19 epithelium has also been assessed. Our results suggest that SHP2 deficiency compromised epithelial growth and organoid formation at the later stage (Fig. 1), which further corroborates the observation that sufficient epithelial regeneration is important for parturition (Lines 181-182).

There are multiple occasions of mix-ups in the figures vs legends between E16 and E19 that confuse the results. Thanks very much for this reminder. We have checked our manuscript carefully and corrected these errors in the text.

There is no decidualization at labor. This process is completed far earlier. This statement in the text (line 367) encapsulates the major pitfalls of this manuscript, which mixes up processes that occur at very different developmental time points. Thanks very much for this comment. It is true that decidualization is initiated at an early stage in both humans and mice. However, there are considerable numbers of decidualized stromal cells in the maternal part in mice. The localization of the decidualization marker Prl8a2 indicates that these decidualized stromal cells mainly localize in the compact zone (Fig. 5A in our revised manuscript). In our scRNA-Seq, since the decidualized stromal cells were removed manually, there are no decidualized stromal cells in our scRNA-seq datasets. The presence of decidualized stromal cells in mice approaching parturition has been proven by multiple scRNA-Seq and functional studies (PMID: 36599348, 31067461, 27454290, 23979163). The existence of decidualized stromal cells in humans has also been proven, as there are PRL-positive cells at labor, although their number is significantly lower than in the mouse (PMID: 35260533).

The link to TLR4-mediated inflammatory responses is poorly developed and perhaps unnecessary for a manuscript focused on Shp2 function in the induction of labor. Thanks for this suggestive concern. To avoid distracting from the major focus of this study, the physiological significance of epithelial SHP2 in parturition, the part on TLR4-mediated inflammatory responses has been removed in the revised manuscript.

How does uterine epithelial-specific ablation of Shp2 affect the ovary? This link is poorly developed. Decidua-derived PGF2α is critical for luteolysis in the ovary approaching parturition, as stated in response to question 5 above. A previous study proved that COX-1-deficient mice show delayed parturition due to impaired luteolysis accompanied by elevated serum progesterone concentrations (PMID: 9751758). In this study, COX-1 expression in the epithelium is significantly downregulated in the absence of SHP2, which in turn impairs luteolysis and contributes to unsuccessful parturition initiation.

The manuscript confuses insights from the mouse model with those in humans. Clear distinctions should be made in all statements that relate to either one or the other. For example, the term 'uterine milk' is usually only applied in humans. Similarly, 'prostaglandin activity in amniotic epithelium as an important initiator of labor at term' refers to the situation in humans.
Thanks for this suggestion. We have carefully modified our manuscript to distinguish statements about mouse models from those about humans. For example, we have changed the term 'uterine milk' to 'glands are an important source of nutrients in both human and mouse' (Lines 136-137), and modified 'prostaglandin activity in amniotic epithelium as an important initiator of labor at term' to 'prostaglandin activity in human amniotic epithelium as an important initiator of labor at term' (Lines 344-345).

The manuscript needs major contextual editing for language. Major language editing of this manuscript has been performed by a native speaker.

Additional points: 1. The inconsistent use of "mouse uteri" throughout the text is distracting. Please substitute "mice uteri" and "mice uterus" with "mouse uteri". We have substituted all instances of "mice uteri" and "mice uterus" with "mouse uteri". Thanks for this suggestion; the composition and concentrations of the organoid expansion medium have been provided in the revised manuscript and are listed in Supplementary Table 4.

The major phenotype of epithelial SHP2-deficient mice is delayed parturition. To exclude the possibility that this phenotype originates from a defect at an early stage of pregnancy, day 4, the day before embryo implantation and gland extension, was selected for histomorphological and deletion-efficiency analyses. Since embryo implantation and decidualization appear comparable in both genotypes, day 19, the day approaching parturition with higher expression of contraction-associated proteins, including OXTR and CX43, was selected to analyze differences in cell heterogeneity and gene expression across the different cell types.

6. Line 249: Unsure where this fits in this research - why focus on inflammation now, and why jump from delayed onset of labor to inflammation-induced PTB therapy? Thanks for this suggestive reminder. To focus more on the major finding of this study, the failure of parturition, the parts about inflammation and inflammation-induced PTB therapy have been removed in the revised manuscript, as stated above.

Reviewer #2: The manuscript by Liu and coworkers describes the endometrial cell types and the impact of SHP2 in the endometrial epithelium in the non-labor (day 16) and labor (day 19) mouse uterus. This manuscript gives important data on the cell types and potential communication pathways in the mouse uterus during this period. It also shows transcriptomic changes in these cell types at these stages. The manuscript then investigates the role of SHP2 in the endometrial epithelium by conditional ablation of this gene using the Ltf-Cre model. This analysis shows that loss of SHP2 in the epithelium results in a delay in parturition. The delay is in part due to a decrease in prostaglandin synthesis, and it can be rescued by prostaglandin treatment. This is an important manuscript because it not only defines the endometrial cell types during pregnancy and labor but also shows a critical role for the endometrial epithelium during this process. There are two minor weaknesses in this manuscript. 1. The LPS experiment is not well developed. It states that LPS was given on day 4. Why was this day chosen and not a later point in pregnancy? What is the significance of the RNA-seq in this approach, since the actions of SHP2 are later in pregnancy? This data should be removed. Thanks for this suggestive information. After careful consideration and per the suggestions of reviewers 1 and 2, the LPS experiments have been removed in our revised manuscript because this part of the experiments was a distraction.
2. What is the mechanism of SHP2 regulation of parturition? In the discussion it is suggested that it may involve the regulation of P4 signaling. This should be evaluated at least bioinformatically by comparing the gene expression changes to known genes identified by transcriptomics or cistromics of PGR signaling. Thanks very much for this suggestive comment. In the current study, our evidence suggests that SHP2 participates in parturition by regulating epithelial COX1 expression and PGF2α production to facilitate luteolysis. To further evaluate the effect of SHP2 on the PR signaling pathway, we compared the differentially expressed genes (DEGs) in both stromal and epithelial cells with known genes identified by PGR cistromics (GSM857546: PR ChIP-Seq of P4-treated uteri in ovariectomized mice; GSM5964410: PR ChIP-Seq on day 14). The results show that one third to one half of the DEGs in both epithelium and stroma are PR target genes in both datasets (Fig. 15). These results confirm that PR signaling is altered in epithelial SHP2-deficient mice.
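As an aside for readers who wish to reproduce this kind of DEG-to-cistrome comparison, a minimal sketch is given below. All file names, column names, and the background gene count are hypothetical placeholders, not the pipeline actually used in the study.

```python
# Minimal sketch of a DEG-vs-ChIP-seq-target overlap test.
# File names, column names, and the background size N are hypothetical.
import pandas as pd
from scipy.stats import hypergeom

degs = set(pd.read_csv("epithelium_degs.csv")["gene"])           # scRNA-seq DEGs
pr_targets = set(pd.read_csv("pr_chipseq_targets.csv")["gene"])  # PR ChIP-seq targets

overlap = degs & pr_targets
print(f"{len(overlap)}/{len(degs)} DEGs ({len(overlap)/len(degs):.0%}) are PR targets")

# Hypergeometric enrichment: is the overlap larger than expected by chance?
N = 20000  # assumed number of background (expressed) genes
p = hypergeom.sf(len(overlap) - 1, N, len(pr_targets), len(degs))
print(f"enrichment p-value = {p:.2e}")
```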
Reviewer #3: The manuscript authored by M. Liu and associates describes experiments to establish the factors relevant to the differences in gene expression between the late-gestation and periparturient uteri in the mouse. The methods employed include scRNA-seq with consequent bioinformatic analysis, CellChat to determine the cell interactions in the uterus around parturition, and conditional depletion of a gene, Shp2, in the uterine epithelium. The results demonstrate that deletion of this gene does not affect embryo implantation or establishment of the deciduum, but that it affects the late-pregnancy changes in this tissue that normally accompany the birth of the litter. This is a novel and noteworthy finding, as it opens a new horizon in the exploration of parturition. A strength of the manuscript is the cogent discussion of this new and important information. In terms of the methods employed, it would be useful to have more information on the quality-control aspects of the scRNA analysis.

1. This manuscript has much to recommend it. It is a novel exploration resulting in the discovery of a previously unknown mechanism, i.e., the role of the uterine epithelium in the process of parturition. The single-cell global gene analysis appears to have been appropriately conducted, and the presence of scatterplots in the figures is indicative of the high quality of the data. The manuscript will require extensive language editing, as there are many syntax, grammar and spelling errors that render it difficult to read and detract from the overall presentation. We have made extensive language edits to remove syntax, grammar and spelling errors from the manuscript.

2. Line 32: It would seem that decidualization, a process that begins at implantation on day 5 of gestation, was normal, not aberrant. The process that was disturbed in this study was parturition; thus the abstract is misleading. This requires clarification. Thanks for this comment. We have modified the abstract to avoid this misunderstanding.

3. Line 119: It is stated that the signature of epithelial cells in labor is portrayed. Although parturition in the mouse usually occurs on day 19 post mating, there is some variability. How was it determined that the animals were in labor at the time of collection of the uterine tissues? This is a very good suggestion. The uterus needs to be appropriately prepared before parturition, characterized by increased expression of contraction-associated proteins (CAPs), including OXTR and CX43, on day 19. The muscle layer then becomes contractile to expel the fetuses on the night of day 19. In the current study, the major scope is to dissect the significance of epithelial SHP2 during labor preparation. The statements that "the animals were in labor" were misleading and have been changed to "the animals were before labor".

4. Figure 1F is not labelled. Figure 1F has been labelled.

Figure 3C: There is no information about whether these differences are statistically significant. The statistical significance was calculated by Fisher's exact test, and the p value has been added in the revised manuscript.

6. Lines 188 et seq: Was it determined whether Shp2 was depleted in the ovary? Ltf-driven Cre recombinase has shown no expression in the ovary in previous work (Fig. 16) (PMID: 24823394). We also examined the expression level of Shp2 in the ovaries of both genotypes on day 19. The results showed that Shp2 expression was not affected in the ovary (Fig. 17). Thanks very much for this suggestive information. To further confirm our ISH result for Star, we also performed real-time PCR to compare the expression of Star in the ovaries of both genotypes. There is no significant change in Star mRNA expression (Fig. 16). Although the mRNA level of Star detected by ISH and real-time PCR is not changed appreciably, the protein level of STAR is much higher in the SHP2-deficient ovary, which might be due to translational regulation of STAR as reported previously (PMID: 19321517). We hope that our responses are satisfactory. Again, we express our gratitude to the reviewers and the editor for the efficient handling of this manuscript.

Overall, however, major issues still remain. The identification of HB-EGF up-regulation between E16 and E19 does not justify the focus on Shp2. It rather appears that the Shp2 data were available prior to the single-cell data, and that the post-hoc addition of the scRNA-seq data prompted the authors to make a tenuous connection to justify the use of the Shp2 model. The study would be presented in a much stronger way if it started with the Shp2 data in Figure 3 and focused on the inclusion of scRNA-seq data from this knockout model. This part of the manuscript is also much better written. I hence urge the authors to omit the first scRNA-seq part, which comes with major issues, as highlighted below.

Major comments: 1. It is still unclear what precise tissue was single-cell sequenced, and Figure 1a is confusing. Given what is stated in the text, the sequenced tissue should only entail the myometrial layer in Figure 1a. If that is correct, please re-draw Figure 1a to make this clear. Also, the representation of the decidua is incorrect, as this is only a thin layer overlying the placenta at this stage. The placenta and embryo should be indicated as well in the figure. These aspects could be depicted in grey tones, and the sequenced tissue colour-coded in red (or similar). The same applies to later figures that incorporate this diagrammatic depiction. However, more importantly, this description in the text is completely inconsistent with the data and with later drawings in Fig. 7a and the staining in Supp. Fig. 1.
It would rather appear that the entire region broadly labelled as "ST", which is de facto decidua and which contains the glands and luminal epithelium, was sequenced. This corresponds with the identified single-cell clusters, which show only minor contributions of muscle cells (the cell type that should be dominant if uterine tissue "after the removal of decidua, placenta and fetus" was sequenced). If that is so, the entire description of the approach in the text is wrong and/or misleading and needs to be revised.

2. Highlighted cell type-specific markers remain unverified. Notably, the "Epi" vs "gland" signature genes that are meant to distinguish uterine luminal and glandular epithelium, i.e., Msx1, Sox9, (Cebpd), Ehf, Foxp1, etc., need to be validated by immunostaining to verify their cell-type specificity at this stage of gestation. This is immensely important, as these are described as novel signature genes capable of distinguishing these two epithelial cell types. The nomenclature of "Epi" in this context is highly unfortunate, as both cell types are epithelial in character.

3. Some of these cell type-specific markers should be applied to the Shp2 d/d uteri to determine which epithelial layers are particularly affected.

4. The two parts of this manuscript (scRNA-seq of WT deciduae/uteri and the Shp2 analysis) are not interlinked, i.e., identified signature genes of specific cell types are not applied to the KOs, and vice versa the markers applied to the KOs have little or no relationship to the scRNA-seq data. This point further underpins the advice provided above that the first part of this manuscript should be omitted.

5. Please reword the cell type-specific expression in Epi_0, Gland_10 and prolifEpi_20. It is unclear what these populations are meant to be, especially as they are all epithelial cell types. This point relates to comment 2 above.

6. As noted in the summary above, the shift from Hbegf up-regulation to investigating Shp2 is a huge leap. The question arises whether similar phenotypes would be seen upon Hbegf deletion or ERK inhibition specifically in uterine epithelial cells. In any case, the Shp2 phenotype is likely far more pronounced than an Hbegf phenotype. As such, the study would be portrayed far better if it focused on Shp2, without the single-cell analysis upfront that is somewhat strenuously portrayed to justify the focus on Shp2.

7. The Akr1b3 data need to be shown as a separate channel; the faint red staining is invisible in the overwhelmingly green overlay.

8. Contrary to what is stated, PGF2a injection does not lead to more females delivering earlier. Please correct this statement. Furthermore, the n=5 of PGF2a-injected females is insufficient. There is an inexplicably huge discrepancy in the data between Fig. 3c (Shp2 d/d survival rate 61%) and Fig. 3n (Shp2 d/d survival rate 40%), so a few more animals may make a huge difference.

9. Please highlight that the parturition delay is only observed in some Shp2 d/d females, even if the various genes are de-regulated in all/most of them. Thus, the epithelial Shp2 deletion is only partially critical for determining the timing of parturition.

Reviewer #2 (Remarks to the Author): Fig 2A - this figure would benefit from higher magnification and a double staining for epithelial cell markers alongside ERa. It is currently unclear what part of the tissue one is looking at, so clarification in the figure legend and appropriate labelling is required. 3. Fig 1E, 1F, and 1J - all show D16 (as compared to Sup Fig 2B-C), but the figure legend says D19.
Please revise, as this is key information from your results and it is crucial to report the correct timing. 4. Line 458: Please provide the full organoid expansion medium composition with final concentrations of reagents in a table in the Supplementary information. Boretto et al. (2017) (reference 61) report testing varying concentrations of Rspondin-1 and Wnt3A in their publication, hence this manuscript should indicate what has been used. 5. Line 174: Justification is needed for why the authors are using day 4 and day 19 Shp2-cKO mice. 6. Line 249: Unsure where this fits in this research - why focus on inflammation now, and why jump from delayed onset of labor to inflammation-induced PTB therapy?

Fig. 1 The role of SHP2 in epithelial growth in late gestation. Organoid growth of Shp2 f/f and Shp2 d/d epithelial cells from day 19 uteri.

Fig. 4 There are proliferative epithelial cells during re-formation of the uterine epithelium. Co-immunostaining of CK8 and KI67 in day 12 and day 14 uteri. E: embryo; P: placenta.

Fig. 5 The location of glandular epithelium at day 19 in inter-implantation sites. HE staining and FOXA2 immunostaining in inter-implantation sites of day 19 uteri. M: mesometrial; AM: anti-mesometrial; E: embryo.

Fig. 6 3D images of day 5 uteri. Images of one uterine horn in Rosa26 tdTomato Ltf Cre/+ mice on day 5. IS: implantation site; inter-IS: inter-implantation site; M: mesometrial; AM: anti-mesometrial; * indicates embryos. (PMID: 29426931)

Fig. 4 is more in keeping with expectation as for cell proportions. What is the difference? Thanks very much for this concern. The tissue collection for these two scRNA-Seq datasets is different. Fig. 1b mainly collects tissue from days 16 and 19, while Fig. 4d collects tissue from day 19 in WT and Shp2-KO mice. These two batches of scRNA-Seq were carried out separately; differences in the tissue collection process, especially the stromal cells remaining after placenta separation, digestion, and other steps, might contribute to this discrepancy in cell proportions.

Fig. 11 The spatial relationship between PGF2α production and COX1 and COX2 expression in Shp2 f/f and Shp2 d/d mice on day 19. A Sm-FISH of Akr1b3 with CK8 co-staining in inter-implantation sites of Shp2 f/f and Shp2 d/d mice on day 19. M: mesometrial; AM: anti-mesometrial; E: embryos. B Sm-FISH of Ptgs1 with CK8 co-staining in inter-implantation sites of Shp2 f/f and Shp2 d/d mice on day 19. M: mesometrial; AM: anti-mesometrial; E: embryos. C Sm-FISH of Akr1b3 with CK8 co-staining in implantation sites of Shp2 f/f and Shp2 d/d mice on day 19.

Fig. 12 The structure of the epithelium was not affected in the absence of Shp2. A Co-immunostaining of E-cadherin and CK8 in Shp2 f/f and Shp2 d/d mouse uteri on day 19. B HE staining in Shp2 f/f and Shp2 d/d mouse uteri on day 19.

Fig. 13 The epithelium of the cervix and oviduct was not affected by Ltf-Cre-induced conditional ablation of Shp2. A HE staining of Shp2 f/f and Shp2 d/d mouse cervix on day 19. B HE staining of Shp2 f/f and Shp2 d/d mouse oviduct on day 19.
2. SupplFig 2A - this figure would benefit from higher magnification and a double staining for epithelial cell markers alongside ERa. It is currently unclear what part of the tissue one is looking at, so clarification in the figure legend and appropriate labelling is required. We have conducted a double staining for the epithelial cell marker CK8 alongside ERα in the decidua on day 19 (Fig. 14), and we have modified the figure legend and added the corresponding labels (Lines 990-992).

Fig. 15 The overlap of PR target genes evidenced by PR ChIP-Seq with DEGs in both epithelium and stroma revealed by scRNA-Seq.

Fig. 16 The mRNA levels of Star in Shp2 f/f and Shp2 d/d ovaries. Quantitative real-time PCR analysis of Star in Shp2 f/f and Shp2 d/d ovaries on day 19. The values are normalized to Gapdh and shown as the mean ± SEM (n=3 biologically independent samples). Two-tailed unpaired Student's t-test; ns: not statistically significant.

Please do not hesitate to contact me should you have any questions.

This is a revised manuscript by Liu et al. that describes a potential role for the non-receptor protein tyrosine phosphatase Shp2 in parturition initiation. The study has been improved in particular by providing overview staining images that delineate where the luminal and glandular epithelial compartments persist in late-gestation implantation sites.
9,605.4
2023-11-14T00:00:00.000
[ "Biology", "Medicine" ]
Wide-Band Wide-Beam Circularly-Polarized Slot-Coupled Antenna for Wide-Angle Beam Scanning Arrays The design of a wide-band wide-beam circularly-polarized slot-coupled (WWCS) radiating element for wide-angle scanning arrays (WASAs) is addressed. The WWCS radiator exploits a simple geometry composed of a primary (driven) and a secondary (passive) element to generate wide-beam patterns with rotational symmetry and high polarization purity. The synthesis was carried out by means of a customized version of the System-by-Design (SbD) method to derive a WWCS radiator with circular polarization (CP) and wide-band impedance matching. The results of the numerical assessment, along with a tolerance analysis, confirm that the synthesized WWCS radiating element is a competitive solution for the implementation of large WASAs. More specifically, a representative design working at f0 = 2.45 [GHz] is shown, having fractional bandwidth FBW ≃ 15%, half-power beam-width HPBW(f0) ≃ 180 [deg] in all elevation planes, and high polarization purity, with broadside axial ratio AR(f0) = 3.2 [dB] and cross-polar discrimination XPD(f0) = 15 [dB]. Finally, the experimental assessment, carried out on a PCB-manufactured prototype, verifies the wide-band and wide-beam features of the designed WWCS radiator.

Introduction In recent decades, within the rapid development of modern wireless systems, there has been continuously growing interest in beam-scanning antennas [1-3]. In such a framework, traditional reflectors provide excellent radiation features (e.g., high gain), but they are bulky and heavy. Moreover, mechanical scanning implies a slow reconfigurability of the main beam direction. Phased antenna arrays are an excellent alternative, since they guarantee agile/flexible beam scanning [1,4,5]. As a matter of fact, they have been widely employed in satellite communications, radars, and meteorology [1,4]. Moreover, they will be a key technology in next-generation mobile communication systems (i.e., 5G/6G and beyond [2,3]). Microstrip patch antennas are very popular elementary radiators for phased arrays thanks to several advantages: they are lightweight, have low profiles, and involve simple/low-cost manufacturing [6-8]. However, conventional microstrip-based arrays are usually narrowband [9,10] and generally exhibit limited scanning capabilities [11]. Since these limitations prevent their use in several applications where a large field-of-view (FOV) over a wide band is required, great efforts have been devoted to the study of innovative wide-beam radiators.

Table 1. Comparison in terms of central frequency (f0), fractional bandwidth (FBW), polarization, elevation HPBW at the central frequency, and overall size (in wavelengths at f0, λ0), between the proposed WWCS antenna and wide-beam designs that recently appeared in the scientific literature. [Table rows garbled in extraction; the recoverable entries include the comb-slot-loaded patch of [21] (8.25 ÷ 11.5, 7.6 ÷ 9.1, LP, HPBW 83 ÷ 103 [deg], 0.55 × 0.55 × 0.16 λ0) and the probe-fed U-slotted patch of [22].]

Some interesting approaches implement the wide-beam behavior by adding parasitic elements (e.g., vertical electric walls [22,23], patches [11,19], or rings [20] - Table 1) on which additional current components are induced to radiate end-fire patterns that constructively sum with those radiated by the main radiator.
Following this guideline, both linearly (LP) [11,21-23] and circularly (CP) polarized [20,25,28] wide-beam radiators were synthesized (Table 1), even though the CP ones have several advantages over the LP ones. For instance, there is improved immunity to multi-path distortion, polarization mismatch losses, and the Faraday rotation effects caused by the ionosphere in satellite communications [26,27,31]. Thus, CP wide-beam radiators are a very promising technological asset for many wireless systems, including global positioning and navigation systems (GPS and GNSS), radars, satellite communications, radio-frequency identification, mobile communications, and wireless local area networks [26-28].

Accordingly, this paper proposes a novel wide-band wide-beam CP slot-coupled (WWCS) antenna based on the combination of a primary (driven) and a secondary (passive) element to generate large-HPBW patterns with rotational symmetry and high polarization purity. More specifically, a 3D microstrip layout is obtained by placing a dielectric layer hosting a metallic ring at a proper distance from a circular patch. By properly exciting a CP current within such a parasitic element, a torus-shaped pattern with maximum gain in the azimuth plane is radiated, thus triggering an increased end-fire gain which, combined with the broadside radiation of the underlying patch, results in a wide beam along every elevation plane. Unlike the narrowband design in [20] (having a fractional bandwidth of FBW = 1.2% - Table 1), the proposed radiating element is characterized by (i) a wide-band impedance matching (i.e., FBW = 15% - Table 1) as well as (ii) a simpler feeding mechanism for CP (i.e., slot coupling versus probe feeding) [7]. Moreover, unlike the single-element design in [20], the possibility of exploiting such an element in a WASA is addressed as well.

Therefore, the main novelties of this work consist of (a) the design of a new wide-beam CP radiator exploiting an aperture-coupling feeding mechanism to significantly widen the impedance bandwidth and to overcome the spurious radiation, narrowband operation, and more complex manufacturing of probe-fed layouts in the literature [20], (b) the formulation of the arising synthesis problem, unlike the parametric trial-and-error approach used in [20], as a global optimization enabling more effective control of the CP over the complete radiating semi-sphere and a proper impedance match within the user-defined wide bands, (c) its efficient solution by means of a customized system-by-design (SbD) methodology, and (d) the wide-band assessment of the suitability of the WWCS for implementing large planar WASAs, differently from [20] where only the single radiator is considered.

The manuscript is organized as follows. Section 2 describes the layout of the WWCS radiator. The SbD-based synthesis strategy, which is used for the synthesis of this radiating element, is detailed in Section 3. A representative example, concerned with an LHCP design, is illustrated in Section 4 to numerically assess, via full-wave (FW) simulations along with a tolerance analysis, the effectiveness of the proposed radiator for implementing wide-band WASAs. The experimental assessment of the designed WWCS radiator, carried out on a PCB-manufactured prototype, is shown in Section 5. Eventually, some conclusions and final remarks are presented in Section 6.
Figure 1 shows a geometric sketch of the layout of the proposed WWCS radiator. The antenna lies in the (x, y) plane and comprises L = 3 square dielectric layers (l = 1, ..., L) of side L_a. The thickness, relative permittivity, and loss tangent of the l-th (l = 1, ..., L) layer are denoted by H_l, ε_rl, and tan δ_l, respectively. The two stacked bottom layers (i.e., layers 1 and 2) form the primary antenna element, which consists of a circular microstrip patch of radius R_p printed on layer 2 [Figures 1 and 2b]. Such a patch is fed with an aperture-coupling mechanism. Towards this end, a cross-shaped slot is etched in the ground plane that separates layers 1 and 2 [Figures 1 and 2a], which is in turn excited by a microstrip feeding line of width W_f and characteristic impedance Z_0. The latter is printed on the bottom face of layer 1 [Figures 1 and 2a]. To maximize the EM coupling, the microstrip line, the slot, and the patch are aligned with respect to the (x, y) plane (Figure 1). Moreover, the feeding line is terminated in an open-circuited stub whose length L_f [Figure 2a] is properly tuned so that the standing-wave current induced within the microstrip is maximum at the slot barycenter [6]. It is worth pointing out that, even though a multiple-layer etching manufacturing process is required, the adopted aperture feeding enables some advantages with respect to a probe/pin-based choice [6]: (a) a wide-band impedance matching, (b) an easier construction, since it avoids the vertical pin that would require additional drilling and soldering processes, and (c) a higher polarization purity and pattern symmetry, since the vertical pin would behave as an additional monopole degrading the overall axial ratio (AR) and cross-polar discrimination (XPD). Moreover, the use of independent substrates for the circular patch (i.e., layer 2) and the feeding line (i.e., layer 1) gives the designer more flexibility in selecting the optimum dielectric support for each antenna "building block" with respect to a solution with coplanar edge feeding (either direct or inset-based) [6].

WWCS Antenna Layout As for the shape of the slot, a 45-degree-rotated cross with unequal arms of widths W_1 (W_2) and lengths L_1 (L_2) [Figures 1 and 2a] was adopted to realize the desired circular polarization (CP). As a matter of fact, the introduced asymmetry allows one to excite, by injecting a current into the feeding line and exploiting the aperture-coupling mechanism, two orthogonal current components having a phase difference of 90 [deg] on the patch. The combination of such excited modes yields a CP current, which in turn radiates a CP field. More specifically, left-hand (LHCP) or right-hand (RHCP) polarizations are obtained by simply letting L_1 < L_2 or L_1 > L_2, respectively [7]. Otherwise, polarization switching (LHCP ⇔ RHCP) can be obtained by simply mirroring the cross aperture with respect to the y-axis. Thanks to such a modeling, it is possible to enforce a CP by means of a simple design and manufacturing process, since there is no need for two separate orthogonal microstrip lines. Moreover, a simple circular patch can be used, avoiding more complex solutions such as, for instance, a primary element with an elliptical shape (which would imply the tuning of the two semi-axes) or electrically-small perturbations of the external border of the patch (e.g., stubs or notches) to yield an AR close to one [7].
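As a brief textbook-style aside (not drawn from [7] or [20]), the CP mechanism can be made explicit: two equal-amplitude orthogonal modes in phase quadrature superpose as

E(z, t) = Re{E_0 (x̂ ± j ŷ) e^{j(ωt − kz)}} = E_0 [x̂ cos(ωt − kz) ∓ ŷ sin(ωt − kz)],

i.e., a field vector of constant magnitude rotating at the angular frequency ω, with the sign selecting the handedness. If the amplitudes become unequal (E_x ≠ E_y) while the 90 [deg] phase difference is kept, the polarization degrades to elliptical with AR = max(E_x, E_y)/min(E_x, E_y), which is why the slot arm lengths L_1 and L_2 must be finely tuned.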
The top layer (i.e., layer 3) hosts the secondary element of the antenna, which is implemented as a metallic ring of inner radius R_r and width W_r [Figures 1 and 2c]. Such a parasitic element is "activated" by an air-coupling mechanism obtained by placing layer 3 at a proper distance D above the patch (Figure 1). Overall, the total height of the WWCS antenna turns out to be T = H_1 + H_2 + D + H_3 (Figure 1). The secondary passive element shares a geometric rotational symmetry with the primary active one to obtain high polarization purity and an azimuth-invariant radiation pattern, which is a highly desirable feature for WASAs [11]. Indeed, by properly exciting a CP current within the parasitic ring [20], a torus-shaped pattern with maximum gain in the azimuth plane [i.e., θ = 90 [deg] - Figure 1a] is radiated. The metallic ring shape is selected to assure that the arising parasitic radiation mode triggers an increased end-fire gain. As a consequence, the combination of the field radiated by the primary element [having maximum gain at broadside, i.e., θ = 0 [deg] - Figure 1a] and that of the secondary radiator generates a wide beam with half-power beamwidth close to HPBW(ϕ) = 180 [deg] along every elevation plane ϕ ∈ [0, 360] [deg] [Figure 1a].

Design Methodology In order to address the synthesis problem at hand in a computationally effective way, a customized implementation of the system-by-design (SbD) paradigm [32] is exploited and briefly summarized in the following. More specifically, the "Problem Formulation" SbD functional block [32] is customized to (i) define a proper set of geometric descriptors of the WWCS layout and (ii) formulate a suitable multi-objective cost function accounting for several user-defined requirements on both impedance matching and radiation features. Concerning (i), once the characteristics of the substrates (i.e., materials/thicknesses) of the layers l = 1, ..., L and the width of the microstrip feeding line W_f are determined as detailed in [8] (p. 148, Equation 3.197) to yield the desired characteristic impedance Z_0 (e.g., Z_0 = 50 [Ω]), the set Ω = {Ω_k; k = 1, ..., K} of geometric descriptors (Figures 1 and 2) is defined. It includes the auxiliary parameters α and β (0 < α < 1; 0 < β < 1), which avoid the generation of physically unfeasible geometries for the secondary element by enforcing the constraints R_r < L_a/2 and W_r < L_a/2 − R_r, respectively [Figure 2c].

The synthesis problem at hand can then be stated as follows: WWCS Antenna Design Problem - Determine the optimal setup of the degrees of freedom (DoFs), Ω^(opt), such that the corresponding WWCS radiator (i) exhibits a suitable impedance matching within the user-defined wide frequency range f_min ≤ f ≤ f_max, (ii) radiates an azimuth-invariant wide-beam pattern suitable for WASAs, and (iii) implements an LHCP/RHCP with high polarization purity within the half-space region 0 ≤ θ ≤ 90 [deg] [Figure 1a].

As for (ii), because of the conflicting requirements on the bandwidth and the radiation features, as well as the non-linear dependence of the latter on Ω, the original synthesis problem is recast as the global optimization problem Ω^(opt) = arg min_Ω Φ(Ω), Φ(Ω) being the cost function, which quantifies the mismatch with the synthesis targets, given by

Φ(Ω) = Σ_{γ∈Γ} w_γ Φ_γ(Ω),    (3)

where Γ = {S11, HPBW, AR, XPD} and w_γ is a real weight associated with the γ-th cost function term Φ_γ(Ω). In more detail, the impedance-bandwidth term of the cost function (γ = S11) is defined as

Φ_S11(Ω) = (1/Q) Σ_{q=1}^{Q} H{S11^(dB)(f_q|Ω) − S11^th} × |S11^(dB)(f_q|Ω) − S11^th|,

where S11(f_q|Ω) = [Z_in(f_q|Ω) − Z_0]/[Z_in(f_q|Ω) + Z_0] is the reflection coefficient at the antenna input port, Z_in(f_q|Ω) being the input impedance; S11^th is the desired threshold; and f_q = f_min + (q − 1)(f_max − f_min)/(Q − 1) is the q-th (q = 1, ..., Q) frequency sample, Q being the number of spectral components analyzed with full-wave (FW) simulations. Finally, H{.} is the Heaviside function, equal to H{ζ} = 1 if ζ > 0 and H{ζ} = 0 otherwise. As for the wide-beam features, the HPBW cost term (γ = HPBW) is given by

Φ_HPBW(Ω) = (1/M) Σ_{m=1}^{M} H{HPBW^th − HPBW(ϕ_m|Ω)} × |HPBW^th − HPBW(ϕ_m|Ω)|,

where HPBW^th is the user-defined requirement, while ϕ_m = (m − 1) × 360/M [deg] is the m-th elevation plane [Figure 1a], M being the number of elevation planes considered for the numerical evaluation of the HPBW. The last two cost function terms in (3) (i.e., γ = AR and γ = XPD) are related to the CP; they penalize, over the angular region 0 ≤ θ ≤ 90 [deg] [Figure 1a], violations of the maximum admissible axial ratio AR^th and of the minimum admissible cross-polar discrimination XPD^th, respectively. The axial ratio is given by [6]

AR(f, θ, ϕ|Ω) = [|E_C(f, θ, ϕ|Ω)| + |E_X(f, θ, ϕ|Ω)|] / [|E_C(f, θ, ϕ|Ω)| − |E_X(f, θ, ϕ|Ω)|],

where the subscripts "C" and "X" denote the co-polar and cross-polar field components, respectively (i.e., C ← LHCP and X ← RHCP if an LHCP antenna is designed, and vice versa for RHCP operation), equal to E_{C/X} = E · ρ*_{C/X}, E being the far-field electric field, (·) the dot product, and (.)* the complex conjugate. Moreover, ρ_C and ρ_X are the polarization unit vectors for the two CPs, ρ_{LHCP/RHCP} = (θ̂ ± j ϕ̂)/√2 (the signs depending on the adopted time-harmonic convention). Finally, XPD^th is the minimum XPD, with XPD(f, θ, ϕ|Ω) = G_C(f, θ, ϕ|Ω)/G_X(f, θ, ϕ|Ω), where G_{C/X}(f_q, θ, ϕ|Ω) = 2π r² |E_{C/X}(f_q, θ, ϕ|Ω)|² / [η_0 P_acc(f_q|Ω)] is the gain related to the C/X field component, respectively, η_0 is the free-space impedance, and P_acc(f_q|Ω) is the accepted power at the antenna terminals for a given incident power P_inc.
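To make the weighted-sum structure concrete, a minimal sketch of the cost-function evaluation is given below. The linear thresholded-violation penalty and all helper names are illustrative assumptions, and the quantities fed into it would come from a full-wave solver rather than the stub shown.

```python
# Minimal sketch of the multi-objective cost Phi(Omega) = sum_g w_g * Phi_g(Omega).
# Threshold values, weights, and the linear penalty form are illustrative assumptions;
# the solver() callable stands in for full-wave (e.g., HFSS) outputs.
import numpy as np

def penalty(values, threshold, sense):
    """Average Heaviside-gated violation of a threshold, as in the terms of Eq. (3)."""
    v = np.asarray(values, dtype=float)
    viol = (v - threshold) if sense == "max" else (threshold - v)  # > 0 means violated
    return float(np.mean(np.maximum(viol, 0.0)))

def cost(omega, solver, weights, thresholds):
    out = solver(omega)  # dict of per-frequency / per-plane results (assumed interface)
    terms = {
        "S11":  penalty(out["s11_db"],   thresholds["S11"],  sense="max"),  # S11 <= th
        "HPBW": penalty(out["hpbw_deg"], thresholds["HPBW"], sense="min"),  # HPBW >= th
        "AR":   penalty(out["ar_db"],    thresholds["AR"],   sense="max"),  # AR <= th
        "XPD":  penalty(out["xpd_db"],   thresholds["XPD"],  sense="min"),  # XPD >= th
    }
    return sum(weights[g] * terms[g] for g in terms)
```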
The overall SbD-driven design work-flow consists of the following procedural steps:

1. Input Phase - Define the bounds of the target operating band, f_min and f_max, the required CP (i.e., LHCP or RHCP), and the threshold value for each key performance indicator, Γ^th = {S11^th, HPBW^th, AR^th, XPD^th}.

2. Perform the following operations: (a) Set L_a = λ_0/2, λ_0 being the free-space wavelength at the central frequency f_0 = (f_min + f_max)/2; (b) Select from an off-the-shelf data-sheet the material/thickness of the l-th (l = 1, ..., L) layer; (c) Compute the width of the feeding line W_f to yield the desired characteristic impedance Z_0 (p. 148, Equation 3.197 [8]); (d) Derive an analytic guess, R_p, for the radius of the primary element of the radiator as detailed in [33] (p. 846, Equation 14.69), then set the optimization range [Ω_1^(min), Ω_1^(max)] as a percentage of R_p, being Ω_1 = R_p; (e) Define the optimization bounds of the remaining descriptors.

3. Design Initialization (i = 0) - Define an initial swarm of P particles, P_0 = {Ω_0^(p); p = 1, ..., P}, along with an initial training set of S_0 FW-simulated samples for the surrogate model (SM).

4. SbD Design Loop (i = 1, ..., I) - Iteratively update the swarm positions and velocities by applying the PSO-OK/C updating rules [32], leveraging both the cost function predictions and the associated "reliability estimations" output by the SM. As for the latter, the training set at the i-th (i = 1, ..., I) iteration, T_i, of size S_i = (S_0 + i), comprises progressively added training samples according to the SbD "reinforcement learning" strategy [32], aimed at refining the prediction accuracy within the attraction basin of Ω^(opt).

5. Output Phase - Output the final setup of the DoFs, Ω^(opt), whose corresponding layout best fits all user-defined requirements.
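A high-level sketch of such a surrogate-assisted loop is given below. It pairs a standard PSO update with a Gaussian-process surrogate retrained each iteration, which mimics the PSO-OK/C "reinforcement learning" idea conceptually but is not the actual implementation of [32]; all names and the surrogate choice are illustrative.

```python
# Sketch of a surrogate-assisted PSO (SbD-style) loop; illustrative only,
# not the PSO-OK/C code of Ref. [32]. lo/hi are arrays of descriptor bounds.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sbd_loop(fw_cost, lo, hi, P=9, I=200, S0=45, c1=2.0, c2=2.0, w=0.4, seed=0):
    rng = np.random.default_rng(seed)
    K = lo.size
    X = rng.uniform(lo, hi, size=(S0, K))          # initial FW-simulated training set
    y = np.array([fw_cost(x) for x in X])          # expensive full-wave evaluations
    pos = rng.uniform(lo, hi, size=(P, K))
    vel = np.zeros((P, K))
    pbest, pbest_y = pos.copy(), np.full(P, np.inf)
    for _ in range(I):
        gp = GaussianProcessRegressor().fit(X, y)  # surrogate of the cost function
        yhat = gp.predict(pos)
        better = yhat < pbest_y
        pbest[better], pbest_y[better] = pos[better], yhat[better]
        gbest = pbest[np.argmin(pbest_y)]
        r1, r2 = rng.random((P, K)), rng.random((P, K))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        X = np.vstack([X, gbest])                  # add one FW sample per iteration,
        y = np.append(y, fw_cost(gbest))           # refining the surrogate near the optimum
    return X[np.argmin(y)]
```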
Numerical Assessment This section is aimed at illustrating the performance of the proposed WWCS antenna model. Towards this end, the synthesis of an LHCP radiator working in the same band as the design in [20] (centered at f_0 = 2.45 [GHz]) was addressed. The Rogers RO4350B substrate was chosen for the L = 3 layers (ε_rl = 3.66, tan δ_l = 0.004, l = 1, ..., L), with thicknesses set to H_1 = 4.56 [mm] and H_2 = H_3 = 3.04 [mm] (realized by stacking off-the-shelf 1.52 [mm] boards, as detailed in Section 5). According to [8], the width of the microstrip feeding line turns out to be W_f = 6.65 [mm] for Z_0 = 50 [Ω], while the analytic guess of the patch radius is set to R_p = 17.46 [mm] [33]. The PSO-OK/C parameters were chosen by following the literature guidelines to yield a time saving of ∆t_sav = 86% with respect to a standard optimization based on a bare integration of the global optimizer and the FW simulator to compute the cost function values in correspondence with each trial antenna layout [32]. More specifically, the swarm size, the number of iterations, the social/cognitive acceleration coefficients, the inertial weight, and the initial training size used for the numerical evaluation of (3) were set to P = 9, I = 200, C_1 = C_2 = 2, ω = 0.4, and S_0 = 45, respectively.

The geometric descriptors of the SbD-optimized layout are reported in Table 2, while the corresponding layout, modeled in the Ansys HFSS FW simulator [38] and having an overall height of T = 71.98 [mm] (T = 0.59 [λ_0] - Table 1), is shown in Figure 3. Turning to the analysis of the antenna performance, Figure 4 shows the simulated reflection coefficient at the antenna input port versus frequency. As can be observed, the radiating element fully complies with the requirement, since S11^(dB)(f|Ω^(opt)) ≤ S11^th within the whole target band. In more detail, it turns out that S11^(dB)(f|Ω^(opt)) ≤ S11^th over an even wider frequency interval (f ∈ [2.29, 2.66] [GHz]), assessing the wide-band behavior of the proposed design with an overall fractional bandwidth of FBW|_WWCS = 15% [39], while, for instance, the state-of-the-art solution in [20] is limited to FBW|_[Pan 2014] = 1.2% (Figure 4 and Table 1).

As for the radiation features, Figure 5 shows the simulated gain patterns at f_0. Since the HPBW is close to 180 [deg] on every elevation plane [Figure 5c], it is reasonable to classify the proposed antenna as a wide-beam one, suitable for implementing WASAs. It is worth noticing that such a feature has been obtained thanks to the constructive combination of the fields radiated by the primary and secondary sources. To better illustrate the EM phenomena and interactions, Figure 6 shows the 2D plot of the magnitude of the electric field, |E(x, z)|, on a vertical surface parallel to the (x, z)-plane and crossing the barycenter of the antenna. As can be observed, the air coupling between the bottom (primary) and the top (secondary) element of the radiator at hand guarantees a proper excitation of the parasitic element, enabling the generation of a wide beam in the far-field region. The wide-beam behavior of the synthesized WWCS antenna over a wide frequency range is detailed in Figure 7a.

The optimized WWCS layout exhibits the desired LHCP operation, as pointed out by both the co-polar, G_LHCP(f_0, θ, ϕ), and the cross-polar, G_RHCP(f_0, θ, ϕ), gain patterns in Figure 5, where it can be clearly observed that G(f_0, θ, ϕ) ≈ G_LHCP(f_0, θ, ϕ) and G_LHCP(f_0, θ, ϕ) >> G_RHCP(f_0, θ, ϕ) for 0 ≤ θ ≤ 90 [deg], with broadside AR and XPD equal to AR(f_0, θ = 0, ϕ = 0) = 3.2 [dB] and XPD(f_0, θ = 0, ϕ = 0) = 15 [dB], respectively. Moreover, such a good polarization purity is kept almost unaltered over the complete radiating upper semi-sphere, with the exception of elevation angles close to the antenna end-fire, as illustrated by the 2D maps of AR(f_0, θ, ϕ) [Figure 8a] and XPD(f_0, θ, ϕ) [Figure 8c], as well as by the corresponding thresholded pictures aimed at highlighting the fulfilment of the design requirements [Figure 8b,d].
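For completeness, a small sketch of how AR and XPD can be extracted from complex far-field components is given below. The e^{jωt} convention and the LHCP sign are assumptions that must be matched to the solver's conventions; the sample field values are arbitrary.

```python
# Sketch: AR and XPD from complex far-field components E_theta, E_phi at one angle.
# The LHCP/RHCP sign convention (e^{j*omega*t} assumed) must match the solver's.
import numpy as np

def cp_figures(E_theta, E_phi):
    e_lhcp = (E_theta + 1j * E_phi) / np.sqrt(2)   # co-polar component (LHCP design)
    e_rhcp = (E_theta - 1j * E_phi) / np.sqrt(2)   # cross-polar component
    ec, ex = abs(e_lhcp), abs(e_rhcp)
    ar_db  = 20 * np.log10((ec + ex) / (ec - ex))  # axial ratio (co > cross assumed)
    xpd_db = 20 * np.log10(ec / ex)                # cross-polar discrimination
    return ar_db, xpd_db

ar, xpd = cp_figures(E_theta=1.0, E_phi=-0.85j)    # arbitrary broadside sample
print(f"AR = {ar:.1f} dB, XPD = {xpd:.1f} dB")
```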
It is worth remarking that the slight degradation of both AR and XPD appears only in the most challenging region (i.e., θ ≈ 90 [deg]) and is possibly due to the spurious radiation of the slot along the directions of its major arms (i.e., ϕ = 135 [deg] and ϕ = 315 [deg] - Figures 8 and 3). In order to assess the excitation of an LHCP, the plot of the magnitude of the instantaneous surface current density, |J_surf(x, y; t)|, is reported in Figure 9 at four consecutive instants within one period. One can observe that the fundamental TM_11 mode is properly excited on the circular patch [33] and that there is a clockwise rotation of the corresponding surface current distribution (Figure 9). The vector plot of the electric field distribution at a height of z = 10λ_0, E(x, y; t), shown in Figure 10 for the same instants, further verifies the desired CP of the radiated wave, which evolves in time according to an LHCP.

For comparison purposes, Figure 11 plots the broadside gain G(f, θ = 0, ϕ = 0) [Figure 11a] and the axial ratio AR(f, θ = 0, ϕ = 0) [Figure 11b] within the band of interest for the proposed WWCS model and for the design in [20]. It turns out that the synthesized radiator exhibits a good AR performance, especially within the band f ∈ [2.4, 2.6] [GHz], where AR|_WWCS ≤ 6 [dB] [40], resulting in an AR bandwidth (ARBW) equal to ARBW|^(6 dB)_WWCS = 8%, while the corresponding ARBW of the design in [20] is much narrower [Figure 11b].

Finally, the suitability of the WWCS radiator as the elementary building block of circularly-polarized wide-band WASAs was assessed. Towards this end, the radiation features of a large planar uniform phased array, comprising N = (50 × 50) identical WWCS elementary radiators, were studied. To account for the mutual coupling in this large aperture, a periodic model was simulated in HFSS; the results are summarized in Figure 12. The scan loss (SL) on both elevation planes is always smaller than 8.5 [dB] at the central frequency [Figure 12a]. Moreover, it is worth noticing that there is a good stability of the sidelobe level (SLL) when scanning the beam on both planes [i.e., −13.6 ≤ SLL ≤ −8.2 [dB] - Figure 13a].

In addition to the numerical assessment, a tolerance analysis has been carried out to give the interested reader some insights into the reliability and robustness of the proposed antenna layout with respect to fabrication tolerances, both stand-alone and within an array arrangement. First, the height of the parasitic element, D, was assumed to deviate by ±5% and ±10% from the nominal value D^(opt) (Table 2) because of manufacturing tolerances. Figure 15 summarizes the results of the tolerance analysis versus frequency for the input reflection coefficient [Figure 15a], the broadside AR [Figure 15b], and the HPBW along the ϕ = 0 [deg] [Figure 15c] and ϕ = 90 [deg] [Figure 15d] planes. As can be inferred, the proposed antenna layout turns out to be quite robust. More precisely, the wide-band [Figure 15a] and wide-beam [Figure 15c,d] characteristics of the WWCS radiator are confirmed regardless of the non-negligible fabrication tolerances on D, the fractional bandwidth being equal to FBW = 12.1% in the worst case [i.e., D = D^(opt) − 10%·D^(opt) - Figure 15a]. As a consequence, the scan loss of the array, SL(f), within the working frequency range, f_min ≤ f ≤ f_max, is quite stable on both elevation planes (Figure 16) as well.
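Although only the full-wave periodic model actually captures mutual coupling, the scan-loss trend can be anticipated with the usual pattern-multiplication approximation (element pattern times array factor). The sketch below uses a crude cos^0.1(θ) stand-in for the wide-beam element pattern and ignores coupling entirely, so it is illustrative only.

```python
# Sketch: relative gain at the scan angle for one 50-element cut of the planar
# array via pattern multiplication; cos(theta)^0.1 is a crude wide-beam stand-in.
import numpy as np

c, f0, N = 3e8, 2.45e9, 50
d = 0.5 * (c / f0)                                  # assumed half-wavelength pitch
k = 2 * np.pi * f0 / c

def array_gain_db(theta_deg, scan_deg):
    th, th0 = np.radians(theta_deg), np.radians(scan_deg)
    phase = k * d * (np.sin(th) - np.sin(th0)) * np.arange(N)
    af = abs(np.exp(1j * phase).sum()) / N          # normalized array factor
    elem = np.cos(th) ** 0.1                        # wide-beam element pattern model
    return 20 * np.log10(max(af * elem, 1e-12))

for scan in (0, 30, 60):                            # relative gain at the scan angle
    print(f"scan {scan:2d} deg: {array_gain_db(scan, scan):6.2f} dB")
```

With such a wide-beam element model the predicted scan loss stays below 1 dB even at 60 [deg], which is consistent with the qualitative advantage of wide-HPBW elements for WASAs.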
Experimental Assessment The experimental validation of the performance of the designed WWCS has been carried out next (Figure 19). In order to exploit available off-the-shelf RO4350B PCB boards, two (layers 2 and 3 - Figure 1) and three (layer 1 - Figure 1) substrates of thickness h = 1.52 [mm] have been stacked to realize the different layers of the antenna. The overall structure has been assembled using four nylon M4 threaded rods and sixteen nylon bolts, stacking the PCBs together and placing the parasitic ring at a distance D = D^(opt) from the driven patch (Figure 19). An RS 759-5252 SMA connector has been used to feed the antenna prototype. Figure 20 compares the simulated and measured reflection coefficients at the antenna input port.

As for the radiation features of the fabricated antenna, the far-field patterns have been measured inside an anechoic chamber of dimensions 9 × 6 × 6 [m³]. The AUT has been placed on a remotely-controlled rotating frame, and the electric field has been measured by means of a circularly-polarized probe connected to a signal analyzer, both placed on a dielectric mast at a distance of 3 [m] from the AUT. In order to avoid field perturbations due to cabling, the AUT has been connected with a short coaxial cable to a small transmitter able to generate a constant-amplitude, fixed-frequency signal at f = f_0 = 2.45 [GHz]. The transmitter has been placed just behind layer 1 of the AUT. Similarly, the presence of a long coaxial cable connected to the field probe has been avoided thanks to the use of a PMM 9060 EMI Receiver/Signal Analyzer (30 [MHz] - 6 [GHz]) that can be remotely controlled by means of a fiber-optic link.

A good match between the simulated and measured gain patterns has been obtained. As a matter of fact, the pattern cuts along both the ϕ = 0 [deg] [Figure 21a] and ϕ = 90 [deg] [Figure 21b] elevation planes closely match the outcomes of the numerical assessment. Moreover, it turns out that the measured HPBW verifies the wide-beam behavior of the radiator on both planes, being HPBW(f_0, ϕ = 0)|_meas = 151 [deg] [Figure 21a] and HPBW(f_0, ϕ = 90)|_meas = 172 [deg] [Figure 21b], respectively. Finally, the measured gain, AR, and XPD are equal to G(f_0, θ = 0, ϕ = 0)|_meas = 2.8 [dB], AR(f_0, θ = 0, ϕ = 0)|_meas = 3.3 [dB], and XPD(f_0, θ = 0, ϕ = 0)|_meas = 14.8 [dB], respectively, verifying a good match with the simulated values.

Conclusions The design of a novel wide-band wide-beam circularly-polarized elementary radiator has been proposed for WASAs. Such a WWCS structure leverages a cross-shaped aperture-coupling feeding mechanism to achieve wide-band LHCP/RHCP operation using a simple circular patch and a single microstrip line. Moreover, it takes advantage of the air coupling between the primary and secondary EM sources to realize rotationally-symmetric patterns with large elevation HPBWs and high polarization purity over the complete upper semi-sphere. The computationally-efficient synthesis of the layout of the WWCS antenna, which supports the desired CP operation, has been carried out with a customized implementation of the SbD paradigm.
Accordingly, the main advancements with respect to the state of the art [20] include (i) the exploitation of an aperture-feeding mechanism instead of probe feeding to significantly widen the impedance bandwidth, mitigate spurious radiation, and enable easier manufacturing, (ii) the formulation of the design problem as a global optimization rather than a parametric trial-and-error approach, enabling better control of the AR and XPD over the complete radiating semi-sphere, (iii) the wide-band study of the radiation features of the resulting planar array, as well as (iv) the fabrication tolerance analysis of both single-element and array performance.

The numerical results, concerned with the representative design of a WWCS radiator working at the central frequency of f_0 = 2.45 [GHz], have demonstrated that the proposed radiating structure provides (1) a wide fractional impedance bandwidth (FBW ≃ 15%), which is 12.5 times larger than that of state-of-the-art solutions based on similar EM mechanisms [20], (2) wide-beam operation, with HPBW ≃ 180 [deg] on all elevation planes at f_0, and (3) high polarization purity, with broadside AR(f_0) = 3.2 [dB] and XPD(f_0) = 15 [dB]. As for the arising WASA, the numerical assessment has pointed out the potential of the proposed elementary-radiator layout for the realization of wide-band circularly-polarized WASAs. Finally, the reliability and robustness of the proposed antenna layout with respect to fabrication tolerances have been verified for both the stand-alone and array arrangements. Furthermore, the experimental assessment of a PCB-manufactured prototype has verified the FW-simulated outcomes, confirming both the wide-band and wide-beam features of the designed WWCS radiator (Figures 20 and 21).

It should be pointed out that the proposed design concept and methodology are general, since they can be applied to synthesize wide-band wide-beam CP radiators working in different operative bands. Indeed, the designer is given the freedom to choose the materials of the different layers as well as the desired target performance (i.e., bandwidth, HPBW, AR, and XPD) for the specific application scenario at hand. Future works, beyond the scope of the current manuscript, will be aimed at assessing the possibility of exploiting stripline technology to feed the antenna and at investigating the resulting advantages and drawbacks.
7,018.2
2023-01-18T00:00:00.000
[ "Physics" ]
Analysis of the resonant components in B0->J/\psi pi+pi- Interpretation of CP violation measurements using charmonium decays, in both the B0 and Bs systems, can be subject to changes due to "penguin"-type diagrams. These effects can be investigated using measurements of the Cabibbo-suppressed B0->J/\psi pi+pi- decays. The final-state composition of this channel is investigated using a 1.0/fb sample of data produced in 7 TeV pp collisions at the LHC and collected by the LHCb experiment. A modified Dalitz plot analysis is performed using both the invariant mass spectra and the decay angular distributions. An improved measurement of the B0->J/\psi pi+pi- branching fraction of (3.97 +/- 0.09 +/- 0.11 +/- 0.16)x10^{-5} is reported, where the first uncertainty is statistical, the second is systematic and the third is due to the uncertainty on the branching fraction of the decay B- ->J/\psi K- used as a normalization channel. In the J/\psi pi+pi- final state, significant production of f0(500) and rho(770) resonances is found, both of which can be used for CP violation studies. In contrast, evidence for the f0(980) resonance is not found, thus establishing the first upper limit on the branching fraction product B(B0->J/\psi f0(980)) x B(f0(980)->pi+pi-) < 1.1x10^{-6}, leading to an upper limit on the absolute value of the mixing angle of the f0(980) with the f0(500) of < 31 degrees, both at 90% confidence level.

Introduction CP violation measurements using neutral B meson decays into J/ψ mesons are of prime importance both for determining Standard Model (SM) parameters and for searching for physics beyond the SM. In the case of B0 decays, the final state J/ψ K0S is the most important for measuring sin 2β [1], while in the case of B0s decays, used to measure φs, only the final states J/ψ φ [2-4] and J/ψ π+π− [5] have been used so far, where the largest component of the latter is J/ψ f0(980) [6]. The decay rate for these J/ψ modes is dominated by the color-suppressed tree-level diagram, an example of which is shown for B0 decays in Fig. 1(a), while penguin processes, an example of which is shown in Fig. 1(b), are expected to be suppressed. Theoretical predictions of the effects of such "penguin pollution" vary widely for both B0 and B0s decays [7], so it is incumbent upon experimentalists to limit possible changes in the values of the CP-violating angles measured using other decay modes.

The decay B0 → J/ψ π+π− can occur via a Cabibbo-suppressed tree-level diagram, shown in Fig. 2(a), or via several penguin diagrams. An example is shown in Fig. 2(b), while others are illustrated in Ref. [8]. These decays are interesting because they can also be used to measure or limit the amount of penguin pollution. The advantage of using the decay B0 → J/ψ π+π− arises because the relative amount of pollution is larger. In the allowed decays, e.g., B0 → J/ψ K0S, the penguin amplitude is multiplied by a factor of λ² R e^{iφ}, where λ is the sine of the Cabibbo angle (≈ 0.22), while in the suppressed decays the factor becomes R′ e^{iφ′}, where R and R′, and φ and φ′, are expected to be similar in size [8]. A similar study uses the decay B0s → J/ψ K0S [9]. CP violation measurements in the J/ψ π+π− mode utilizing B0-anti-B0 mixing determine sin 2β_eff, which can be compared to the well-measured sin 2β. Differences can be used to estimate the magnitude of penguin effects. Knowledge of the final-state structure is the first step in this program.
Such measurements of sin 2β_eff have been attempted in the B0 system using the J/ψ π0 final state [10]. In order to ascertain the viability of such CP violation measurements, we perform a full "Dalitz-like" analysis of the final state. Regions in π+π− mass that correspond to spin-0 final states would be CP eigenstates. Final states containing vector resonances, such as the ρ(770), can be analyzed in a manner similar to that used for the decay B0s → J/ψ φ [2-4]. It is also of interest to search for the f0(980) contribution and to obtain information concerning the mixing angle between the f0(980) and the f0(500), partners in the scalar nonet, as the latter should couple strongly to the d-dbar system. Branching fractions for B0 → J/ψ π+π− and J/ψ ρ0 have previously been measured by the BaBar collaboration [11].

In this paper the J/ψ π+ and π+π− mass spectra and decay angular distributions are used to determine the resonant and non-resonant components. This differs from a classical Dalitz plot analysis [12] because one of the particles in the final state, the J/ψ meson, has spin 1 and its three decay amplitudes must be considered. We first show that there are no evident structures in the J/ψ π+ invariant mass, and then model the π+π− invariant mass with a series of resonant and non-resonant amplitudes. The data are then fitted with the coherent sum of these amplitudes. We report on the resonant structure and the CP content of the final state.

Data sample and selection requirements The data sample consists of 1.0 fb−1 of integrated luminosity collected with the LHCb detector [13] using pp collisions at a center-of-mass energy of 7 TeV. The detector is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. Components include a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream. The combined tracking system has a momentum resolution ∆p/p that varies from 0.4% at 5 GeV to 0.6% at 100 GeV, and an impact parameter resolution of 20 µm for tracks with large transverse momentum (pT) with respect to the proton beam direction. Charged hadrons are identified using two ring-imaging Cherenkov (RICH) detectors. Photon, electron and hadron candidates are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The trigger consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage that applies a full event reconstruction [14].

Events are triggered by a J/ψ → µ+µ− decay, requiring two identified muons with opposite charge that have pT(µ±) greater than 500 MeV, an invariant mass within 120 MeV of the J/ψ mass [15], and a common vertex with a fit χ² less than 16. After applying these requirements, there is a large J/ψ signal over a small background [16]. Only candidates with dimuon invariant mass between −48 MeV and +43 MeV relative to the observed J/ψ mass peak are selected, corresponding to a window of about ±3σ. The requirement is asymmetric because of final-state electromagnetic radiation. The two muons are subsequently constrained kinematically to the known J/ψ mass.

Other requirements are imposed to isolate B0 candidates with high signal yield and minimum background. This is accomplished by combining the J/ψ → µ+µ− candidate with a pair of pion candidates of opposite charge, and then testing whether all four tracks form a common decay vertex. Pion candidates are each required to have pT greater than 250 MeV, and the scalar sum of the two transverse momenta, pT(π+) + pT(π−), must be larger than 900 MeV. The impact parameter (IP) is the distance of closest approach of a track to the primary vertex (PV). To test for inconsistency with production at the PV, the IP χ² is computed as the difference between the χ² of the PV reconstructed with and without the considered track. Each pion must have an IP χ² greater than 9. Both pions must also come from a common vertex with an acceptable χ² and form a vertex with the J/ψ with a χ² per number of degrees of freedom (ndf) less than 10 (here ndf equals five). Pion and kaon candidates are positively identified using the RICH system. Cherenkov photons are matched to tracks, the emission angles of the photons are compared with those expected if the particle is an electron, pion, kaon or proton, and a likelihood is then computed. The particle identification makes use of the logarithm of the likelihood ratio comparing two particle hypotheses (DLL). For pion selection we require DLL(π − K) > −10. The four-track B0 candidate must have a flight distance of more than 1.5 mm, where the average decay-length resolution is 0.17 mm. The angle between the combined momentum vector of the decay products and the vector formed from the positions of the PV and the decay vertex (pointing angle) is required to be less than 2.5°.
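As an illustration only, the preselection above maps naturally onto boolean masks over a candidate table; the column names below are hypothetical stand-ins, while the thresholds are the ones quoted in the text.

```python
# Sketch of the preselection as pandas masks; column names are hypothetical,
# thresholds are those quoted in the text (MeV, mm, degrees).
import pandas as pd

def preselect(df: pd.DataFrame) -> pd.DataFrame:
    keep = (
        (df["pt_pip"] > 250) & (df["pt_pim"] > 250)          # per-pion pT
        & (df["pt_pip"] + df["pt_pim"] > 900)                # scalar pT sum
        & (df["ipchi2_pip"] > 9) & (df["ipchi2_pim"] > 9)    # PV inconsistency
        & (df["dll_pi_k_pip"] > -10) & (df["dll_pi_k_pim"] > -10)  # RICH PID
        & (df["b_vtx_chi2_ndf"] < 10)                        # 4-track vertex quality
        & (df["b_flight_mm"] > 1.5)                          # flight distance
        & (df["b_pointing_deg"] < 2.5)                       # pointing angle
    )
    return df[keep]
```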
The two muons are subsequently kinematically constrained to the known J/ψ mass. Other requirements are imposed to isolate B0 candidates with high signal yield and minimum background. This is accomplished by combining the J/ψ → µ+µ− candidate with a pair of pion candidates of opposite charge, and then testing if all four tracks form a common decay vertex. Pion candidates are each required to have pT greater than 250 MeV, and the scalar sum of the two transverse momenta, pT(π+) + pT(π−), must be larger than 900 MeV. The impact parameter (IP) is the distance of closest approach of a track to the primary vertex (PV). To test for inconsistency with production at the PV, the IP χ² is computed as the difference between the χ² of the PV reconstructed with and without the considered track. Each pion must have an IP χ² greater than 9. Both pions must also come from a common vertex with an acceptable χ² and form a vertex with the J/ψ with a χ² per number of degrees of freedom (ndf) less than 10 (here ndf equals five). Pion and kaon candidates are positively identified using the RICH system. Cherenkov photons are matched to tracks, the emission angles of the photons are compared with those expected if the particle is an electron, pion, kaon or proton, and a likelihood is then computed. The particle identification makes use of the logarithm of the likelihood ratio comparing two particle hypotheses (DLL). For pion selection we require DLL(π − K) > −10. The four-track B0 candidate must have a flight distance of more than 1.5 mm, where the average decay length resolution is 0.17 mm. The angle between the combined momentum vector of the decay products and the vector formed from the positions of the PV and the decay vertex (pointing angle) is required to be less than 2.5°. Events satisfying this preselection are then further filtered using a multivariate analyzer based on a Boosted Decision Tree (BDT) technique [17]. The BDT uses six variables that are chosen in a manner that does not introduce an asymmetry between either the two muons or the two pions. They are the minimum DLL(µ − π) of the µ+ and µ−, the minimum pT of the π+ and π−, the minimum of the IP χ² of the π+ and π−, the B0 vertex χ², the B0 pointing angle, and the B0 flight distance. There is discrimination power between signal and background in all of these variables, especially the B0 vertex χ². The background sample used to train the BDT consists of the events in the B0 mass sideband having 5566 < m(J/ψ π+π−) < 5616 MeV. The signal sample consists of two million B0 → J/ψ(→ µ+µ−)π+π− Monte Carlo simulated events that are generated uniformly in phase space, using Pythia [18] with a special LHCb parameter tune [19], and the LHCb detector simulation based on Geant4 [20] described in Ref. [21]. Separate samples are used to train and test the BDT. The distributions of the BDT classifier for signal and background are shown in Fig. 3. To minimize a possible bias on the signal acceptance due to the BDT, we choose a relatively loose requirement of BDT classifier > 0.05, which has a 96% signal efficiency and a 92% background rejection rate. The invariant mass of the selected J/ψ π+π− combinations, where the dimuon pair is constrained to have the J/ψ mass, is shown in Fig. 4.
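As an illustration of this selection step, the following is a minimal sketch of a BDT classifier in Python with scikit-learn; the LHCb analysis used its own (TMVA-style) implementation, the variable names are placeholders mirroring the six listed above, and the mapping of the classifier response onto the [−1, 1]-like scale of the quoted cut value is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Six discriminating variables, mirroring the list in the text
# (names are illustrative; the LHCb analysis used a TMVA BDT).
FEATURES = ["min_dll_mu_pi", "min_pt_pi", "min_ipchi2_pi",
            "b_vertex_chi2", "b_pointing_angle", "b_flight_distance"]

def train_bdt(X_sig, X_bkg):
    """Train a BDT on simulated signal and sideband background.

    X_sig, X_bkg: arrays of shape (n_events, 6) with the features above.
    """
    X = np.vstack([X_sig, X_bkg])
    y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    clf.fit(X, y)
    return clf

def apply_selection(clf, X, threshold=0.05):
    # Loose cut analogous to the "BDT classifier > 0.05" requirement;
    # predict_proba is mapped onto a [-1, 1]-like score for illustration.
    score = 2.0 * clf.predict_proba(X)[:, 1] - 1.0
    return X[score > threshold]
```

Separate training and testing samples, as used in the text, guard against overtraining.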
Figure 4: Invariant mass of J/ψ π+π− combinations. The data are fitted with a double-Gaussian signal and several background functions. The (red) solid double-Gaussian function centered at 5280 MeV is the B0 signal, the (brown) dotted line shows the combinatorial background, the (green) short-dashed line shows the B− background, the (purple) dot-dashed line shows the contribution of B0s → J/ψ π+π− decays, the (black) dot-long-dashed line is the sum of B0s → J/ψ η′(→ ργ) and B0s → J/ψ φ(→ π+π−π0) backgrounds, the (light blue) long-dashed line is the B0 → J/ψ K−π+ reflection, and the (blue) solid line is the total.

There are signal peaks at both the B0s and B0 masses on top of the background. Double-Gaussian functions are used to fit both signal peaks. They differ only in their mean values, which are determined by the data. The core Gaussian width is also allowed to vary, while the fraction and width ratio of the second Gaussian are fixed to those obtained in the fit of B0s → J/ψ φ events. (The details of the fit are given in Ref. [6].) Other components in the fit model take into account background contributions. One source is from B− → J/ψ K− decays, which contribute when the K− is misidentified as a π− and then combined with a random π+; the smaller J/ψ π− mode contributes when it is combined with a random π+. The next source contains B0s → J/ψ η′(→ ργ) and B0s → J/ψ φ(→ π+π−π0) decays, where the γ and the π0 are ignored, respectively. Finally, there is a B0 → J/ψ K−π+ reflection where the K− is misidentified as a π−. Here and elsewhere charge conjugate modes are included when appropriate. The exponential combinatorial background shape is taken from same-sign combinations, which are the sum of J/ψ π+π+ and J/ψ π−π− candidates. The shapes of the other components are taken from the simulation with their normalizations allowed to vary. The fit gives 5287 ± 112 signal and 3212 ± 80 background candidates within ±20 MeV of the B0 mass peak, where a K0S veto, discussed later, is applied. We use the well measured B− → J/ψ K− mode as a normalization channel to determine the branching fractions. To minimize the systematic uncertainty from the BDT selection, we employ a similar selection on B− → J/ψ K− decays after requiring the same preselection, except for particle identification criteria on the K− candidates. Similar variables are used for the BDT, except that the variables describing the combination of π+ and π− in the J/ψ π+π− final state are replaced by ones describing the K− meson. For BDT training, the signal sample uses simulated events and the background sample consists of the data events in the region 5400 < m(J/ψ K−) < 5450 MeV. The resulting invariant mass distribution of the candidates satisfying BDT classifier > 0.05 is shown in Fig. 5. Fitting the distribution with a double-Gaussian function for the signal and a linear function for the background gives 350,727 ± 633 signal and 4756 ± 103 background candidates within ±20 MeV of the B− mass peak.

Analysis formalism

We apply a formalism similar to that used in Belle's analysis [22] of B0 → K−π+χc1 decays and later used in LHCb's analysis of B0s → J/ψ π+π− decays [6]. The decay B0 → J/ψ π+π−, with J/ψ → µ+µ−, can be described by four variables.
These are taken to be the invariant mass squared of J/ψ π+ (s12 ≡ m²(J/ψ π+)), the invariant mass squared of π+π− (s23 ≡ m²(π+π−)), where we use label 1 for the J/ψ, 2 for the π+ and 3 for the π−, the J/ψ helicity angle (θJ/ψ), which is the angle of the µ+ in the J/ψ rest frame with respect to the J/ψ direction in the B0 rest frame, and the angle between the J/ψ and π+π− decay planes (χ) in the B0 rest frame. To improve the resolution of these variables we perform a kinematic fit constraining the B0 and J/ψ masses to their nominal values [15], and recompute the final state momenta. To simplify the probability density function, we analyze the decay process after integrating over χ, which eliminates several interference terms. The decay model is described below. The overall probability density function (PDF), given by the sum of signal, S, and background, B, functions, is

F(s12, s23, θJ/ψ) = f_sig · ε(s12, s23, θJ/ψ) S(s12, s23, θJ/ψ) / N_sig + (1 − f_sig) · B(s12, s23, θJ/ψ) / N_bkg,   (1)

where f_sig is the fraction of the signal in the fitted region and ε is the detection efficiency. The fraction of the signal is obtained from the mass fit and is fixed for the subsequent analysis. The normalization factors are given by

N_sig = ∫ ε(s12, s23, θJ/ψ) S(s12, s23, θJ/ψ) ds12 ds23 d cos θJ/ψ,   N_bkg = ∫ B(s12, s23, θJ/ψ) ds12 ds23 d cos θJ/ψ.

The event distribution for m²(π+π−) versus m²(J/ψ π+) in Fig. 6 shows obvious structure in m²(π+π−). To investigate if there are visible exotic structures in the J/ψ π+ system, as claimed in similar decays [23], we examine the J/ψ π+ mass distribution shown in Fig. 7(a). No resonant effects are evident. Figure 7(b) shows the π+π− mass distribution. There is a clear peak in the ρ(770) region, a small bump around 1250 MeV, but no evidence for the f0(980) resonance. The favored B0 → J/ψ K0S decay is mostly rejected by the B0 vertex χ² selection, but about 150 such events remain. We eliminate them by excluding candidates in which the π+π− invariant mass is close to the K0S mass (the K0S veto).

Figure 7: Distribution of (a) m(J/ψ π+) and (b) m(π+π−) for B0 → J/ψ π+π− candidate decays within ±20 MeV of the B0 mass, shown with the solid line. The (red) points with error bars show the background contribution determined from m(J/ψ π+π−) fits performed in each bin.

The signal function

The signal function for B0 decays is taken to be the coherent sum over resonant states that can decay into π+π−, plus a possible non-resonant S-wave contribution,

S(s12, s23, θJ/ψ) = Σ_λ | Σ_i a^{Ri}_λ e^{iφ^{Ri}_λ} A^{Ri}_λ(s12, s23, θJ/ψ) |²,   (3)

where A^{Ri}_λ(s12, s23, θJ/ψ) is the amplitude of the decay via an intermediate resonance Ri with helicity λ. Each Ri has an associated amplitude strength a^{Ri}_λ for each helicity state λ and a phase φ^{Ri}_λ. Note that the spin-0 component can only have a λ = 0 term. The amplitudes for each i are defined as

A^{R}_λ(s12, s23, θJ/ψ) = F_B^{(L_B)} F_R^{(L_R)} A_R(s23) (P_B/m_B)^{L_B} (P_R/√s23)^{L_R} T_λ(θππ) Θ_λ(θJ/ψ),   (4)

where P_B is the J/ψ momentum in the B0 rest frame and P_R is the momentum of either of the two pions in the dipion rest frame, m_B is the B0 mass, F_B^{(L_B)} and F_R^{(L_R)} are the B0 meson and R resonance Blatt-Weisskopf barrier factors [24], L_B is the orbital angular momentum between the J/ψ and the π+π− system, and L_R is the orbital angular momentum in the π+π− decay, which is equal to the spin of the resonance R because pions have spin-0. Since the parent B0 has spin-0 and the J/ψ is a vector, when the π+π− system forms a spin-0 resonance, L_B = 1 and L_R = 0. For π+π− resonances with non-zero spin, L_B can be 0, 1 or 2 (1, 2 or 3) for L_R = 1 (2), and so on. We take the lowest L_B as the default and consider the other possibilities in the systematic uncertainty.
The Blatt-Weisskopf barrier factors are F^{(0)} = 1, F^{(1)} = √((1 + z0)/(1 + z)), and F^{(2)} = √((z0² + 3z0 + 9)/(z² + 3z + 9)). For the B meson z = r²P_B², where the hadron scale r is taken as 5.0 GeV−1, and for the R resonance z = r²P_R², with r taken as 1.5 GeV−1 [25]. In both cases z0 = r²P0², where P0 is the decay daughter momentum calculated at the resonance pole mass. The angular term, T_λ, is obtained using the helicity formalism and is defined as

T_λ = d^J_{λ0}(θππ),

where d is the Wigner d-function, J is the resonance spin, and θππ is the π+π− resonance helicity angle, which is defined as the angle of the π+ in the π+π− rest frame with respect to the π+π− direction in the B0 rest frame and is calculated from the other variables s12 and s23. The J/ψ helicity dependent term Θ_λ(θJ/ψ) describes the angular distribution of the muons for each J/ψ helicity state. The function A_R(s23) describes the mass squared shape of the resonance R, which in most cases is a Breit-Wigner (BW) amplitude. Complications arise, however, when a new decay channel opens close to the resonant mass. The proximity of a second threshold distorts the line shape of the amplitude. This happens for the f0(980) resonance because the K+K− decay channel opens. Here we use a Flatté model [26], which is described below. The BW amplitude for a resonance decaying into two spin-0 particles, labeled as 2 and 3, is

A_R(s23) = 1 / (m_R² − s23 − i m_R Γ(s23)),

where m_R is the resonance pole mass and Γ(s23) is its energy-dependent width, parametrized as

Γ(s23) = Γ0 (P_R/P_{R0})^{2L_R+1} (m_R/√s23) F_R².

Here Γ0 is the decay width when the invariant mass of the daughter combination is equal to m_R, and P_{R0} is the daughter momentum at that mass. The Flatté model is parametrized as

A_R(s23) = 1 / (m_R² − s23 − i m_R (g_ππ ρ_ππ + g_KK ρ_KK)).

The constants g_ππ and g_KK are the f0(980) couplings to the ππ and KK final states, respectively. The ρ factors account for the Lorentz-invariant phase space and are given by the corresponding two-body phase space factors of the ππ and KK channels, including both charged and neutral modes. For non-resonant processes, the amplitude A(s12, s23, θJ/ψ) is derived from Eq. 4, considering that the π+π− system is S-wave (i.e. L_R = 0, L_B = 1) and that A_R(s23) is constant over the phase space in s12 and s23.
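The lineshapes just defined are easy to prototype numerically. The sketch below (in Python, units of GeV; not the analysis code itself) implements the Blatt-Weisskopf factors for L = 0, 1, 2, the BW amplitude with the energy-dependent width given above, and a simplified Flatté amplitude in which the phase-space factors use the charged channels only and are set to zero below threshold, whereas the full treatment continues them analytically and includes the neutral modes.

```python
import numpy as np

def blatt_weisskopf(z, z0, L):
    """Barrier factor F^(L), with z = (r*P)^2 at the daughter momentum
    and z0 its value at the resonance pole mass, as in the text."""
    if L == 0:
        return 1.0
    if L == 1:
        return np.sqrt((1.0 + z0) / (1.0 + z))
    if L == 2:
        return np.sqrt((z0**2 + 3*z0 + 9) / (z**2 + 3*z + 9))
    raise ValueError("L > 2 not implemented")

def breit_wigner(s23, mR, gamma0, pR, pR0, LR, r=1.5):
    """BW amplitude with the energy-dependent width of the text."""
    F = blatt_weisskopf((r*pR)**2, (r*pR0)**2, LR)
    gamma = gamma0 * (pR/pR0)**(2*LR + 1) * (mR/np.sqrt(s23)) * F**2
    return 1.0 / (mR**2 - s23 - 1j*mR*gamma)

def flatte(s23, mR, g_pipi, g_KK, m_pi=0.1396, m_K=0.4937):
    """Simplified Flatte amplitude for the f0(980): charged channels
    only, phase space clipped to zero below the KK threshold (the full
    treatment continues it analytically)."""
    rho_pipi = np.sqrt(np.maximum(1.0 - 4*m_pi**2/s23, 0.0))
    rho_KK = np.sqrt(np.maximum(1.0 - 4*m_K**2/s23, 0.0))
    return 1.0 / (mR**2 - s23 - 1j*mR*(g_pipi*rho_pipi + g_KK*rho_KK))
```

Scanning |A_R(s23)|² over s23 reproduces the characteristic distortion of the f0(980) lineshape at the KK threshold.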
Detection efficiency

The detection efficiency is determined from a sample of two million B0 → J/ψ(→ µ+µ−)π+π− simulated events that are generated uniformly in phase space. Both s12 and s13 are centered at about 18.4 GeV². We model the detection efficiency using symmetric dimensionless Dalitz plot observables x and y, obtained by centering and rescaling s12 and s13, respectively. These variables are related to s23, since s12 + s13 + s23 = m_B² + m²_{J/ψ} + 2m_π². The acceptance in cos θJ/ψ is not uniform, but depends on s23, as shown in Fig. 8. If the efficiency were independent of s23, then the curves would have the same shape. On the other hand, no clear dependence on s12 is seen. Thus the efficiency model can be expressed as

ε(s12, s23, θJ/ψ) = ε1(x, y) · ε2(s23, θJ/ψ).   (17)

To study the cos θJ/ψ acceptance, we fit the cos θJ/ψ distributions from simulation in 24 bins of m²(π+π−) with the function

1 + a(s23) cos² θJ/ψ,   (18)

giving 24 values of a as a function of m²(π+π−). The resulting distribution, shown in Fig. 9, can be described by an exponential function a(s23) = exp(a1 + a2 s23), with a1 = −1.48 ± 0.20 and a2 = (−1.45 ± 0.33) GeV−2.

Figure 9: Exponential fit to the acceptance parameter a(s23) used in Eq. 18.

Equation 18 is normalized with respect to cos θJ/ψ. Thus, after integrating over cos θJ/ψ, Eq. 17 becomes ε1(x, y). This term of the efficiency is parametrized as a symmetric fourth-order polynomial function in x and y, whose coefficients ε_i are the fit parameters. Figure 10 shows the polynomial function obtained from a fit to the Dalitz-plot distributions of simulated events. The projections of the fit are shown in Fig. 11 and the resulting parameters are given in Table 1.

Figure 11: Projections onto (a) m²(J/ψ π+) and (b) m²(π+π−) of the simulated Dalitz plot used to determine the efficiency parameters. The points represent the simulated event distributions and the curves the projections of the polynomial fits.

Background composition

Backgrounds from B decays into J/ψ final states have already been discussed in Section 2. The main background source is combinatorial and its shape can be determined from the same-sign π±π± combinations within ±20 MeV of the B0 mass peak; this region also contains the small B− background. In addition, there is background arising from partially reconstructed B0s decays, and a B0 → J/ψ K−π+ reflection, which cannot be present in same-sign combinations. We use simulated samples of inclusive B0s decays, and exclusive B0 → J/ψ K*0(892) and B0 → J/ψ K*0_2(1430) decays, to model the additional backgrounds. The background fraction of each source is studied by fitting the J/ψ π+π− candidate invariant mass distributions in bins of m²(π+π−). The resulting background distribution in the ±20 MeV B0 signal region is shown in Fig. 12. It is fitted with histograms from the same-sign combinations and the two additional simulations, giving a partially reconstructed B0s background of 12.8%, and a reflection background that is 5.2% of the total background.

Figure 12: The m²(ππ) distribution of the background. The (black) histogram with error bars shows the same-sign data combinations with additional background from simulation, the (blue) points with error bars show the background obtained from the mass fits, the (black) dashed line is the partially reconstructed B0s background, and the (red) dotted line is the misidentified B0 → J/ψ K−π+ contribution.

The background is parametrized as the product of a factor m(π+π−)/(2 P_R P_B m_B), which converts phase space from s12 to cos θππ, an s23-dependent shape built from a resonant term with parameters m_0 and Γ_0 and from Chebychev polynomials, and a J/ψ helicity angle term. The variable ζ = 2(s23 − s_min)/(s_max − s_min) − 1, where s_min and s_max give the fit boundaries; B2(ζ) is a fifth-order Chebychev polynomial with parameters b_i (i = 1–5), and q(ζ) and p(ζ) are both second-order Chebychev polynomials with parameters c_i (i = 2, 3, 5, 6), while c_1 and c_4 are free parameters. In order to better approximate the real background in the B0 signal region, the same-sign J/ψ π±π± candidates are kinematically constrained to the B0 mass. A fit to the same-sign sample, with additional background from simulation, determines b_i, c_i, m_0 and Γ_0. Figure 13 shows the mass squared projections from the fit. The fitted background parameters are shown in Table 2. The 1 + α cos² θJ/ψ term is a function of the J/ψ helicity angle. The cos θJ/ψ distribution of the background is shown in Fig. 14, and is fitted with the function 1 + α cos² θJ/ψ, which determines the parameter α = −0.38 ± 0.04. We have verified that α is independent of s23.

Figure 14: Distribution of the background in cos θJ/ψ, resulting from J/ψ π+π− candidate mass fits in each bin of cos θJ/ψ. The curve represents the fitted function 1 + α cos² θJ/ψ.
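The 1 + α cos²θJ/ψ background shape can be fitted with a few lines of Python; this is an illustrative sketch (the binned background yields and their uncertainties are assumed to be user-supplied arrays), not the fit actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def helicity_shape(cos_theta, norm, alpha):
    # Background J/psi helicity-angle model used in the text.
    return norm * (1.0 + alpha * cos_theta**2)

def fit_alpha(cos_theta, y, yerr):
    """Fit binned background yields y (with errors yerr) in bins
    centred at cos_theta; returns alpha and its uncertainty."""
    popt, pcov = curve_fit(helicity_shape, cos_theta, y, sigma=yerr,
                           absolute_sigma=True, p0=[y.mean(), 0.0])
    return popt[1], np.sqrt(pcov[1, 1])
```

A negative fitted α, as quoted above, simply means the background is depleted at |cos θJ/ψ| near 1.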
Fit fractions

While a complete description of the decay is given in terms of the fitted amplitudes and phases, the knowledge of the contribution of each component can be summarized by defining a fit fraction, F^R_λ, as the integral of the squared amplitude of R over the Dalitz plot divided by the integral of the entire signal function,

F^R_λ = ∫ |a^R_λ e^{iφ^R_λ} A^R_λ(s12, s23, θJ/ψ)|² ds12 ds23 d cos θJ/ψ / ∫ S(s12, s23, θJ/ψ) ds12 ds23 d cos θJ/ψ.   (24)

Note that the sum of the fit fractions over all λ and R is not necessarily unity, due to the potential presence of interference between two resonances. If the Dalitz plot has more destructive interference than constructive interference, the total fit fraction will be greater than one. Interference term fractions between two resonances are given by the correspondingly normalized cross terms of the signal function (Eq. 25), and the sum of all fit fractions and interference fractions equals unity. Note that interference terms between different spin-J states vanish, because the d^J_{λ0} angular functions in A^R_λ are orthogonal. The statistical errors of the fit fractions depend on the statistical errors of every fitted magnitude and phase, and their correlations. Therefore, to determine the uncertainties, the covariance matrix and parameter values from the fit are used to generate 500 sample parameter sets. For each set, the fit fractions are calculated. The distributions of the obtained fit fractions are described by bifurcated Gaussian functions. The widths of the Gaussians are taken as the statistical errors on the corresponding parameters. The correlations of the fitted parameters are also taken into account.
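The error propagation for the fit fractions described above can be sketched as follows in Python; fractions_fn stands for the (analysis-specific, here assumed given) map from a parameter vector to the fit fractions, and the bifurcated-Gaussian widths are approximated by the one-sided spreads of the sampled distributions.

```python
import numpy as np

def fit_fraction_errors(best_pars, cov, fractions_fn, n_sets=500):
    """Propagate fit-parameter uncertainties to fit fractions by
    sampling parameter sets from the fitted covariance matrix."""
    rng = np.random.default_rng(seed=1)
    samples = rng.multivariate_normal(best_pars, cov, size=n_sets)
    fracs = np.array([fractions_fn(p) for p in samples])
    central = fractions_fn(np.asarray(best_pars))
    # Bifurcated-Gaussian widths, approximated by one-sided spreads
    # below and above the central value of each fraction.
    lo = np.array([np.std(f[f < c]) for f, c in zip(fracs.T, central)])
    hi = np.array([np.std(f[f >= c]) for f, c in zip(fracs.T, central)])
    return central, lo, hi
```

Sampling from the full covariance matrix, rather than varying parameters one at a time, is what carries the parameter correlations into the fraction uncertainties.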
Final state composition

4.1 Resonance models

To study the resonant structures of the decay B0 → J/ψ π+π− we use those combinations with an invariant mass within ±20 MeV of the B0 mass peak and apply the J/ψ K0S veto. The total number of remaining candidates is 8483, of which 3212 ± 80 are attributed to background. Possible resonances in the decay B0 → J/ψ π+π− are listed in Table 3. In addition, there could be some contribution from non-resonant B0 → J/ψ π+π− decays.

Table 3: Possible resonances in the B0 → J/ψ π+π− decay mode, listed with their spin and allowed helicities.

Resonance formalism

The masses and widths of the BW resonances are listed in Table 4. When used in the fit they are fixed to these values, except for the parameters of the f0(500) resonance, which are constrained by their uncertainties. Besides the mass and width, the Flatté resonance shape has two additional parameters g_ππ and g_KK, which are also fixed in the fit to the values obtained in our previous Dalitz plot analysis of B0s → J/ψ π+π− [6], where a large fraction of B0s decays are to J/ψ f0(980). The parameters are taken to be m_0 = 939.9 ± 6.3 MeV, g_ππ = 199 ± 30 MeV and g_KK/g_ππ = 3.0 ± 0.3. All background and efficiency parameters are fixed in the fit. To determine the complex amplitudes in a specific model, the data are fitted by maximizing the unbinned likelihood given as

L = ∏_{i=1}^{N} F(s12^i, s23^i, θ^i_{J/ψ}),

where N is the total number of candidates and F is the total PDF defined in Eq. 1. The PDF is constructed from the signal fraction f_sig, the efficiency model ε(s12, s23, θJ/ψ), the background model B(s12, s23, θJ/ψ), and the signal model S(s12, s23, θJ/ψ). In order to ensure proper convergence using the maximum likelihood method, the PDF needs to be normalized. This is accomplished by first normalizing the J/ψ helicity dependent part ε2(s23, θJ/ψ)Θ_λ(θJ/ψ) over cos θJ/ψ by analytical integration. This integration results in additional factors as a function of s23. We then normalize the mass dependent part, multiplied by the additional factors, using numerical integration over 500×500 bins. The fit determines the relative amplitude magnitudes a^{Ri}_λ and phases φ^{Ri}_λ defined in Eq. 3; we choose to fix a^{ρ(770)}_0 to 1. As only relative phases are physically meaningful, one phase in each helicity grouping has to be fixed; we choose to fix those of the f0(500) and the ρ(770) (|λ| = 1) to 0. In addition, since the final state J/ψ π+π− is a self-charge-conjugate mode and we do not determine the B flavor, the signal function is an average of B0 and B̄0 decays. If we do not consider π+π− partial waves of a higher order than D-wave, then we can express the differential decay rate dΓ/(dm_ππ d cos θππ d cos θJ/ψ), derived from Eqs. 3, 4 and 8, in terms of S-, P-, and D-waves including helicity 0 and ±1 components, where A^{s_k}_λ and φ^{s_k}_λ are the summed amplitude and reference phase for the spin-k resonance group, respectively. The corresponding function for B̄0 decays is similar, but with θ_{π+π−} and θJ/ψ changed to π − θ_{π+π−} and π − θJ/ψ, respectively, as a result of using the π− and µ− to define the helicity angles. Summing the B0 and B̄0 rates results in cancellation of the interference involving the λ = 0 terms for spin-1, and the λ = ±1 terms for spin-2, as they appear with opposite signs for B0 and B̄0 decays. Therefore, we have to fix one phase in the spin-1 (λ = 0) group (φ^{s_P}_0) and one in the spin-2 (λ = ±1) group (φ^{s_D}_{±1}); the phases of the ρ(770) (λ = 0) and the f2(1270) (λ = ±1) are fixed to zero. The other phases in each corresponding group are relative to that of the fixed resonance.

Fit results

To find the best model, we proceed by fitting with all the possible resonances and a non-resonant (NR) component, and then subsequently remove the most insignificant component, one at a time. We repeat this procedure until each remaining contribution has more than 3 statistical standard deviations (σ) significance. The significance is estimated from the fit fraction divided by its statistical uncertainty. The best fit model contains six resonances: the f0(500), f0(980), f2(1270), ρ(770), ρ(1450), and ω(782). In order to compare the different models quantitatively, an estimate of the goodness of fit is calculated from three-dimensional partitions of the one angular and two mass squared variables. We use the Poisson likelihood χ² [28] defined as

χ² = 2 Σ_{i=1}^{N_bin} [ x_i − n_i + n_i ln(n_i/x_i) ],

where n_i is the number of events in the three-dimensional bin i and x_i is the expected number of events in that bin according to the fitted likelihood function. A total of 1021 bins (N_bin) are used to calculate the χ², based on the variables m²(J/ψ π+), m²(π+π−), and cos θJ/ψ. The χ²/ndf and the negative of the logarithm of the likelihood, −ln L, of the fits are given in Table 5; ndf is equal to N_bin − 1 − N_par, where N_par is the number of fit parameters. The difference between the best fit results and fits with one additional component is taken as a systematic uncertainty. Figure 15 shows the best fit model projections of m²(π+π−), m²(J/ψ π+), cos θJ/ψ and m(π+π−). We calculate the fit fraction of each component using Eq. 24. For a P- or D-wave resonance, we report its total fit fraction by summing all the helicity components, and the fraction of the helicity λ = 0 component. The results are listed in Table 6. Systematic uncertainties will be discussed in Section 6.
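A direct implementation of this goodness-of-fit measure, assuming the standard Baker-Cousins form consistent with the definition above:

```python
import numpy as np

def poisson_chi2(n_obs, n_exp):
    """Poisson likelihood chi-square [28]:
    chi2 = 2 * sum( x_i - n_i + n_i * ln(n_i / x_i) ),
    with the logarithmic term set to zero for empty bins."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_exp = np.asarray(n_exp, dtype=float)
    log_term = np.zeros_like(n_obs)
    mask = n_obs > 0
    log_term[mask] = n_obs[mask] * np.log(n_obs[mask] / n_exp[mask])
    return 2.0 * np.sum(n_exp - n_obs + log_term)
```

In the large-statistics limit this statistic reduces to the familiar Pearson χ², but it remains well behaved in sparsely populated three-dimensional bins.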
Two interesting ratios of fit fractions are (0.93 +0.37+0.47 −0.22−0.23)% for ω(782) to ρ(770), and (9.5 +6.7 −3.4 ± 3.0)% for f0(980) to f0(500). The fit fractions of the interference terms are computed using Eq. 25 and listed in Table 7. Table 8 shows the resonant phases from the best fit. For the systematic uncertainty study, Table 9 shows the fit fractions of the components for the best model with one additional resonance.

Figure 15: Dalitz fit projections of (a) m²(π+π−), (b) m²(J/ψ π+), (c) cos θJ/ψ and (d) m(π+π−) for the best model. The points with error bars are data, the signal fit is shown with a (red) dashed line, the background with a (black) dotted line, and the (blue) solid line represents the total. In (a) and (d), the shape variations near the ρ(770) mass are due to ρ(770)−ω(782) interference, and the dip at the K0S mass [15] is due to the K0S veto.

Helicity angle distributions

We show the helicity angle distributions in the ρ(770) mass region, defined as within one full width of the ρ(770) resonance (the width values are given in Table 4), in Fig. 16. The cos θJ/ψ and cos θππ background subtracted and efficiency corrected distributions for this mass region are presented in Fig. 17. The distributions are in good agreement with the best fit model.

Branching fractions

Branching fractions are measured by normalizing to the well measured decay mode B− → J/ψ K−, which has two muons in the final state and the same triggers as the B0 → J/ψ π+π− decays. Assuming equal production of charged and neutral B mesons at the LHC due to isospin symmetry, the branching fraction is calculated as

B(B0 → J/ψ π+π−) = B(B− → J/ψ K−) × (N_{B0}/N_{B−}) × (ε_{B−}/ε_{B0}),

where N and ε denote the yield and total efficiency of the decay of interest. The branching fraction B(B− → J/ψ K−) = (10.18 ± 0.42) × 10−4 is determined from an average of recent Belle [29] and BaBar [30] measurements that are corrected with respect to the reported values, which assume equal production of charged and neutral B mesons at the Υ(4S), using the measured value of Γ(B+B−)/Γ(B0B̄0) = 1.055 ± 0.025 [31]. Signal efficiencies are derived from simulations including trigger, reconstruction, and event selection components. Since the efficiency to detect the J/ψ π+π− final state is not uniform across the Dalitz plane, the efficiency is averaged according to the Dalitz model, where the best fit model is used. The K0S veto efficiency is also taken into account. Small corrections are applied to account for differences between the simulation and the data. We measure the kaon and pion identification efficiencies with respect to the simulation using D*+ → π+D0(→ K−π+) events selected from data. The efficiencies are measured in bins of pT and η, and the averages are weighted using the signal event distributions in the data. Furthermore, to ensure that the p and pT distributions of the generated B mesons are correct, we weight the B− and B0 simulation samples using B− → J/ψ K− and B0 → J/ψ K*0 data, respectively. Finally, the simulation samples are weighted with the charged tracking efficiency ratio between data and simulation in bins of p and pT of the track. The average of the weights is the correction factor. The total correction factors are below 1.04 and largely cancel between the signal and normalization channels.
Multiplying the simulation efficiencies and correction factors gives a total efficiency of (1.163 ± 0.003 ± 0.017)% for B0 → J/ψ π+π− and (3.092 ± 0.012 ± 0.038)% for B− → J/ψ K−, where the first uncertainty is statistical and the second is systematic. Using N_{B−} = 350,727 ± 633 and N_{B0} = 5287 ± 112, we measure

B(B0 → J/ψ π+π−) = (3.97 ± 0.09 ± 0.11 ± 0.16) × 10−5,

where the first uncertainty is statistical, the second is systematic and the third is due to the uncertainty of B(B− → J/ψ K−). The systematic uncertainties are discussed in Section 6. Our measured value is consistent with, and more precise than, the previous BaBar measurement of (4.6 ± 0.7 ± 0.6) × 10−5 [11]. Table 10 shows the branching fractions of the resonant modes, calculated by multiplying the fit fraction and the total branching fraction of B0 → J/ψ π+π−. Since the f0(980) contribution has a significance of less than 3σ, we also quote an upper limit of B(B0 → J/ψ f0(980)) × B(f0(980) → π+π−) < 1.1 × 10−6 at 90% confidence level (CL); this is the first such limit. The limit is calculated, assuming a Gaussian distribution, as the central value plus 1.28 times the sum in quadrature of the statistical and systematic uncertainties. This branching ratio is predicted to be in the range (1−3) × 10−6 if the f0(980) resonance is formed of tetra-quarks, but can be much smaller if the f0(980) is a standard quark anti-quark resonance [8]. Our limit is at the lower boundary of the tetra-quark prediction, and is consistent with a quark anti-quark resonance with a small mixing angle. In Section 7.2, we show that the mixing angle, describing the admixture of ss̄ and light quarks, is less than 31° at 90% CL.

Table 10: Branching fractions for each channel. The upper limit at 90% CL is also quoted for the f0(980) resonance, which has a significance smaller than 3σ. The first uncertainty is statistical and the second is the total systematic.

Systematic uncertainties

The contributions to the systematic uncertainties on the branching fractions are listed in Table 11. Since the branching fractions are measured with respect to the B− → J/ψ K− mode, which has a different number of charged tracks than the decays of interest, a 1% systematic uncertainty is assigned due to differences in the tracking performance between data and simulation. Another 2% uncertainty is assigned because of the difference between two pions and one kaon in the final states, due to decay in flight, multiple scattering, and hadronic interactions. Small uncertainties are introduced if the simulation does not have the correct B meson kinematic distributions. We are relatively insensitive to any differences in the B meson p and pT distributions, since we are measuring relative rates. By varying the p and pT distributions we see at most a change of 0.5%. There is a 1.0% systematic uncertainty assigned for the relative particle identification efficiencies (0.5% per particle). These efficiencies have been corrected from those predicted in the simulation by using the data from D*+ → π+D0(→ K−π+). A 0.6% uncertainty is included for the J/ψ π−π+ efficiency, estimated by changing the best model to the one including all possible resonances. The B0 signal yield changes by 0.5% when the shape of the combinatorial background is changed from an exponential to a linear function. The total systematic uncertainty is obtained by adding each source of systematic uncertainty in quadrature, as they are uncorrelated.
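Returning to the branching-fraction measurement above, a numerical cross-check (ours, not the paper's) of the normalization formula with the quoted yields and efficiencies lands within a few percent of the measured central value, consistent with the small data/simulation correction factors that are not reproduced here:

```python
# Back-of-the-envelope check of the branching-fraction formula with
# the yields and efficiencies quoted in the text; the correction
# factors (< 1.04) are not applied, so the result sits slightly above
# the quoted (3.97 +/- 0.09 +/- 0.11 +/- 0.16) x 10^-5.
br_norm = 10.18e-4            # B(B- -> J/psi K-)
n_sig, n_norm = 5287.0, 350727.0
eff_sig, eff_norm = 0.01163, 0.03092

br_sig = br_norm * (n_sig / n_norm) * (eff_norm / eff_sig)
print(f"B(B0 -> J/psi pi+ pi-) ~ {br_sig:.2e}")   # ~ 4.1e-5
```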
The largest single source, 4.1% due to the uncertainty of B(B− → J/ψ K−), is quoted separately. The sources of the systematic uncertainties on the results of the Dalitz plot analysis are summarized in Table 12. For the uncertainties due to the acceptance or background modeling, we repeat the data fit 100 times, where the parameters of the acceptance or background modeling are generated according to the corresponding covariance matrix. We also study the acceptance function by changing the minimum IP χ² requirement from 9 to 12.5 on both of the pion candidates. As shown previously [6], this increases the χ² of the fit to the angular distributions by one unit. The acceptance function is then applied to the data with the original minimum IP χ² selection of 9, the likelihood fit is redone, and the uncertainties are estimated by comparing the results with the best fit model. The larger of the two variations is taken as the uncertainty due to the acceptance. We study the effect of ignoring the experimental mass resolution in the fit by comparing fits between different pseudo-experiments with and without the resolution included. As the widths of the resonances we consider are much larger than the mass resolution, we find that the effects are negligible, except for the ω(782) resonance, whose fit fraction is underestimated by (0.09 ± 0.08)%. Thus, we apply a 0.09% correction to the ω(782) fit fraction and assign an additional ±0.08% in the acceptance systematic uncertainty. The results shown in the previous sections already include this correction. In the default fit, the signal fraction f_sig = 0.621 ± 0.009, defined in Eq. 1, is fixed; we vary its value within its error to estimate the systematic uncertainty. The change is added in quadrature with the background modeling uncertainties. The uncertainties due to the fit model include adding each resonance that is listed in Table 4 but not used in the best model, changing the default values of L_B in the P- and D-wave cases, varying the hadron scale r parameters for the B meson and the R resonance to 3.0 GeV−1 for both, replacing the f0(500) model by a Zhou and Bugg function [33, 34], and using the alternate Gounaris and Sakurai model [35] for the ρ resonances. The largest variations among those changes are then assigned as the systematic uncertainties for the modeling (see Table 12). Finally, we repeat the data fit by varying the masses and widths of the resonances (see Table 4) within their errors, one at a time, and add the changes in quadrature; the values are taken from [15], except for the f0(500) [27].

Thus the f0(500) state is firmly established in B0 → J/ψ π+π− decays. As discussed in the introduction, a region with only S- and P-waves is preferred for measuring sin 2β_eff. The best fit model demonstrates that the mass region within ±149 MeV (one full width) of the ρ(770) mass contains only a (0.72 ± 0.09)% D-wave contribution, thus this region can be used for a clean CP measurement. The S-wave in this region is (11.9 ± 1.7)%, where the fraction is the sum of the individual fit fractions and the interference.

7.2 Mixing angle between f0(980) and f0(500)

The scalar nonet is quite an enigma. The mysteries are summarized in Ref. [36], and in the "Note on scalar mesons" in the PDG [15]. Let us contrast the masses of the lightest vector mesons with those of the scalars, listed in Table 13.

Table 13: Masses (in MeV) of the lightest vector and scalar mesons for each isospin [15].
For the vector particles, the ω and ρ masses are nearly degenerate and the masses increase as the s-quark content increases. For the scalar particles, however, the mass dependence differs in several ways, which requires an explanation. Some authors introduce the concept of qq̄qq̄ states or superpositions of the four-quark state with the qq̄ state. In either case, the I = 0 f0(500) and f0(980) are thought to be mixtures of the underlying states, whose mixing angle has been estimated previously (see Ref. [8] and references contained therein). Assuming that the ππ and KK̄ decays are dominant, we obtain B(f0(980) → π+π−) = (46 ± 6)%, where we have assumed that the only other decays are to π0π0, at half the π+π− rate, and to neutral kaons, taken equal to charged kaons. We use B(f0(500) → π+π−) = 2/3, which results from isospin Clebsch-Gordan coefficients, assuming that the only decays are into two pions. Since we have only an upper limit on the J/ψ f0(980) final state, we will only find an upper limit on the mixing angle, so if any other decay modes of the f0(500) (f0(980)) exist, they would make the limit more (less) stringent. Our limit then is

tan² φm = [B(B0 → J/ψ f0(980)) / B(B0 → J/ψ f0(500))] × [Φ(500)/Φ(980)] < 0.35

at 90% confidence level, where Φ(500) and Φ(980) are the respective phase space factors; since arctan√0.35 ≈ 30.6°, this translates into a limit |φm| < 31° at 90% confidence level.

Conclusions

We have studied the resonance structure of B0 → J/ψ π+π− using a modified Dalitz plot analysis, where we also include the decay angle of the J/ψ meson. The decay distributions are formed from a series of final states described by individual interfering π+π− decay amplitudes. The largest component is the ρ(770) resonance. The data are best described by adding the f2(1270), f0(500), ω(782), ρ(1450) and f0(980) resonances, where the f0(980) resonance has less than 3σ significance. The results are listed in Table 6. We set an upper limit B(B0 → J/ψ f0(980)) × B(f0(980) → π+π−) < 1.1 × 10−6 at 90% confidence level, which somewhat favors a quark anti-quark interpretation of the f0(980) resonance. We have also firmly established the existence of the J/ψ f0(500) intermediate resonant state in B0 decays, and limit the absolute value of the mixing angle between the two lightest scalar states to less than 31° at 90% confidence level. Our six-resonance best fit shows that the mass region within one full width of the ρ(770) contains mostly P-wave, an (11.9 ± 1.7)% S-wave, and only a (0.72 ± 0.09)% D-wave contribution. Thus this region can be used to perform CP violation measurements, as the S- and P-wave components can be treated in the same manner as in the analysis of B0s → J/ψ φ [2–4]. The measured value of the asymmetry can be compared to that found in other modes, such as B0 → J/ψ K0, in order to ascertain the possible effects due to penguin amplitudes.
Quantum physics on a general Hilbert space

In this chapter we generalize the results of Chapter 2 to infinite-dimensional Hilbert spaces. So let H be a Hilbert space and let B(H) be the set of all bounded operators on H. Here a notable point is that linear operators on finite-dimensional Hilbert spaces are automatically bounded, whereas in general they are not. Thus we impose boundedness as an extra requirement, beyond linearity. This is very convenient, because as in the finite-dimensional case, B(H) is a C*-algebra, cf. §C.1. At the same time, assuming boundedness involves no loss of generality whatsoever, since we can always replace closed unbounded operators by bounded ones through the bounded transform, as explained in §B.21. Nonetheless, even the relatively easy setting of bounded operators leads to some technical complications we have to deal with.

The proof for density operators is analogous. Defining the mean value ⟨a⟩_ψ of a with respect to the Born measure μ_ψ by

⟨a⟩_ψ = ∫_{σ(a)} λ dμ_ψ(λ),

and similarly for ρ, using Theorem 4.3.2 we easily obtain

⟨a⟩_ψ = ⟨ψ, aψ⟩;  (4.11)
⟨a⟩_ρ = Tr(ρa).  (4.12)

As an important special case, suppose that σ(a) = σ_p(a) (i.e., each λ ∈ σ(a) is an eigenvalue); this always happens if H is finite-dimensional. Eq. (A.57) then gives a = Σ_{λ∈σ(a)} λ e_λ, where e_λ is the projection onto the eigenspace H_λ = {ψ ∈ H | aψ = λψ}. Thus

μ_ψ({λ}) = ‖e_λ ψ‖²,  (4.13)

and using the notation P_ψ(a = λ) for μ_ψ({λ}), eq. (4.11) just becomes

⟨a⟩_ψ = Σ_{λ∈σ(a)} λ · P_ψ(a = λ).  (4.14)

It is customary to extend the Born measure on σ(a) ⊂ R to a (probability) measure μ′_ψ on all of R by simply stipulating that

μ′_ψ(Δ) = μ_ψ(Δ ∩ σ(a));  (4.15)

we will often assume this and omit the prime. This obviously implies that μ′_ψ(Δ) = 0 for any Borel set Δ ⊂ R disjoint from σ(a); in particular, if σ(a) is discrete, then μ′_ψ is concentrated on the eigenvalues λ of a, in that

μ′_ψ(Δ) = Σ_{λ∈Δ∩σ(a)} μ_ψ({λ}).  (4.16)

To state an interesting property of the Born measure we need Hausdorff's solution to the relevant special case of the famous Hamburger Moment Problem:

Theorem 4.5. If K ⊂ R is compact, then any finite measure μ on K is determined by its moments

α_n = ∫_K dμ(x) x^n.  (4.17)

Using f(x) = x^n in (4.6), we therefore obtain:

Corollary 4.6. The Born measure μ_ψ is determined by its moments

α_n = ⟨ψ, a^n ψ⟩.  (4.18)

More precisely, we need to be sure that numbers (α_n) of the kind (4.18) are the moments of some (probability) measure. This follows from the spectral theorem by running the above argument backwards, but one may also use the general solution of the Hamburger Moment Problem, which we here state without proof:

Theorem 4.7. A sequence of real numbers (α_n) forms the moments of some measure μ on R iff for all N ∈ N and (β_0, ..., β_N) ∈ C^{N+1} one has Σ_{n,m=0}^{N} β_n β̄_m α_{n+m} ≥ 0. Furthermore, if there are constants C and D such that |α_n| ≤ C D^n n!, then μ is uniquely determined by its moments (α_n).

These conditions are easily checked from (4.18).
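The positivity condition of Theorem 4.7 says precisely that the Hankel matrix H_{nm} = α_{n+m} is positive semi-definite, which is easy to test numerically; the following is a small sketch (not from the book), checked here on the moments of the uniform measure on [0, 1].

```python
import numpy as np

def is_moment_sequence(alpha):
    """Check the positivity condition of Theorem 4.7: the Hankel
    matrix H[n, m] = alpha[n + m], 0 <= n, m <= N, must be positive
    semi-definite for (alpha_n) to be a moment sequence."""
    N = (len(alpha) - 1) // 2
    H = np.array([[alpha[n + m] for m in range(N + 1)]
                  for n in range(N + 1)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -1e-12))

# Moments of the uniform measure on [0, 1]: alpha_n = 1/(n + 1),
# whose Hankel matrix is the (positive definite) Hilbert matrix.
alpha = [1.0 / (n + 1) for n in range(9)]
print(is_moment_sequence(alpha))   # True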
If a is unbounded, but still assumed to be self-adjoint (in the sense appropriate for unbounded operators, cf. Definition B.70), the spectrum σ(a) remains real (see Theorem B.93) but it is no longer compact. Nonetheless, the Born measure on σ(a) may be constructed in almost exactly the same way as in the bounded case, this time invoking Corollary B.21 and Theorem B.158 instead of Theorems 4.2 and B.94, respectively. Corollary 4.4 then holds almost verbatim for the unbounded case:

Corollary 4.8. Let H be a Hilbert space, let a* = a, and let ψ ∈ H be a unit vector. There exists a unique probability measure μ_ψ on the spectrum σ(a) such that

⟨ψ, f(a)ψ⟩ = ∫_{σ(a)} dμ_ψ f,  f ∈ C_0(σ(a)).  (4.19)

Also, eqs. (4.7) and (4.9) hold, as does (4.8), with f ∈ C_0(σ(a)). There is no need to worry about domains, since even if a is unbounded, f(a) is bounded for f ∈ C_b(σ(a)), and hence also for f ∈ C_0(σ(a)). The physical relevance of the Born measure is given by the Born rule: If an observable a is measured in a state ρ, then the probability P_ρ(a ∈ Δ) that the outcome lies in Δ ⊂ R is given by the Born measure μ_ρ defined by a and ρ, i.e.,

P_ρ(a ∈ Δ) = μ_ρ(Δ).  (4.20)

As in the finite-dimensional case, the Born measure may be generalized to families a = (a_1, ..., a_n) of commuting self-adjoint operators. Assuming these are bounded, the C*-algebra C*(a_1, ..., a_n) is defined in the obvious way, i.e., as the smallest C*-algebra containing each a_i, or, equivalently, as the norm-closure of the algebra of all finite polynomials in the (a_1, ..., a_n). This C*-algebra is commutative, as a simple approximation argument shows: polynomials in the a_i obviously commute, and this property extends to the closure by continuity of multiplication. However, even in the bounded case, the correct notion of a joint spectrum is not obvious. In order to motivate the following definition, it helps to recall Definition 1.4, Theorem C.24, and especially the last sentence before the proof of the latter, making the point that the spectrum σ(a) of a single (bounded) self-adjoint operator coincides with the image of the Gelfand spectrum Σ(C*(a)) in C under the map ω → ω(a). Accordingly, the joint spectrum σ(a) of a = (a_1, ..., a_n) is defined as the image of the Gelfand spectrum Σ(C*(a_1, ..., a_n)) in R^n under the map ω → (ω(a_1), ..., ω(a_n)). To justify this definition, we note that:

• For n = 1, this definition reproduces the usual spectrum, cf. Theorem C.24.
• For n > 1 and dim(H) < ∞, we recover the joint spectrum of Definition A.16.
• For n > 1 and dim(H) = ∞, Weyl's Theorem B.91 generalizes in the obvious way: we have λ ∈ σ(a) iff there exists a sequence (ψ_k) of unit vectors in H with lim_{k→∞} ‖(a_i − λ_i)ψ_k‖ = 0 for each i = 1, ..., n. The proof is similar.

One way to see the second claim is to use Proposition C.14 combined with the observation that, as in the case of A = B(H) for finite-dimensional H, any pure state on a finite-dimensional C*-algebra A ⊆ B(H) is a vector state (2.42), too. To see this, we first specialize Theorem C.133 to the finite-dimensional case (where the proof becomes elementary), so that each state on C*(a) takes the form (2.33). Subsequently, we use the spectral decomposition (2.6) and the definition of purity: writing ω as a convex combination of the vector states ω_{υ_i} defined by the eigenvectors of the pertinent density operator forces ω_{υ_i} = ω for each i, so that ω is a vector state, say ω(b) = ⟨ψ, bψ⟩, where ψ is one of the υ_i. Once we know this, suppose λ = (λ_1, ..., λ_n) ∈ σ(a), with λ_i = ω(a_i). Multiplicativity of ω implies that for any finite polynomial p in n real variables we have ⟨ψ, p(a)ψ⟩ = p(λ), which easily gives a_iψ = λ_iψ for each i; for example, take p(x) = (x_i − λ_i)², so that the previous equation gives ‖(a_i − λ_i)ψ‖² = 0. Conversely, if λ is a joint eigenvalue of a, then by definition there exists a joint eigenvector ψ whose vector state ω(b) = ⟨ψ, bψ⟩ on C*(a) is multiplicative. Using this (perhaps contrived) notion of a joint spectrum, Theorem 2.19 now holds by construction also if dim(H) = ∞, where the pertinent isomorphism f → f(a) is given as in the single operator case, that is, by starting with polynomials and using a continuity argument to pass to arbitrary continuous functions.
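In finite dimension these constructions are completely concrete; the following sketch (ours, not the book's) computes the Born measure (4.13) of a self-adjoint matrix and verifies the mean-value formula (4.11).

```python
import numpy as np

def born_measure(a, psi):
    """Born measure of a self-adjoint matrix a in unit vector psi:
    mu_psi({lambda}) = ||e_lambda psi||^2, cf. (4.13); degenerate
    eigenvalues are grouped to form the eigenspace projections."""
    evals, evecs = np.linalg.eigh(a)
    mu = {}
    for lam, v in zip(np.round(evals, 10), evecs.T):
        mu[lam] = mu.get(lam, 0.0) + abs(np.vdot(v, psi))**2
    return mu

# Example: a 2x2 observable and a unit vector.
a = np.array([[1.0, 1.0], [1.0, -1.0]])
psi = np.array([1.0, 0.0])
mu = born_measure(a, psi)
mean = sum(lam * p for lam, p in mu.items())
print(mu, mean, np.vdot(psi, a @ psi).real)  # mean equals <psi, a psi>
```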
Theorem 2.18 and Corollary 4.4 then generalize to:

Theorem 4.10. Let H be a Hilbert space, let a = (a_1, ..., a_n) be a finite family of commuting bounded self-adjoint operators, and let ψ ∈ H be a unit vector. There exists a unique probability measure μ_ψ on the joint spectrum σ(a) such that

⟨ψ, f(a)ψ⟩ = ∫_{σ(a)} dμ_ψ f,  f ∈ C(σ(a)),  (4.23)

or, equivalently, for special Borel sets,

μ_ψ(Δ_1 × ··· × Δ_n) = ‖e_{Δ_1} ··· e_{Δ_n} ψ‖²,  (4.24)

where the e_{Δ_i} = 1_{Δ_i}(a_i) are the pertinent spectral projections (which commute). Similarly for density operators instead of pure states.

If (some of) the operators a_i are unbounded, we use the trick of §B.21 and pass to their bounded transforms b_i, see Theorem B.152. We say that the a_i commute iff the corresponding bounded operators b_i do; this is equivalent to commutativity of all spectral projections of the a_i. We then define the joint spectrum σ(a), in self-explanatory notation, through the bounded transforms, cf. (4.25). This leads to Born measures on σ(a) defined either as in (4.23), with f ∈ C(σ(a)) replaced by f ∈ C_0(σ(a)), cf. (4.19), or as in (4.24). For example, for the position operators on H = L²(R^n), the joint spectrum of the bounded transforms is contained in [−1, 1]^n and hence σ(a) = R^n. For a measurable region Δ ⊂ R^n we then have Pauli's famous formula

P_ψ(x ∈ Δ) = ∫_Δ d^n x |ψ(x)|²  (4.27)

for finding the particle in the region Δ, given that the system is in a pure state ψ.
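Pauli's formula (4.27) is a one-line numerical computation; for example, for a normalized Gaussian wave packet in one dimension (a sketch, ours, with an illustrative width σ = 1):

```python
import numpy as np
from scipy.integrate import quad

# P_psi(x in Delta) = integral over Delta of |psi(x)|^2 dx, here for
# a normalized Gaussian probability density with width sigma.
def prob_in_region(x_lo, x_hi, sigma=1.0):
    density = lambda x: np.exp(-x**2 / (2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
    p, _ = quad(density, x_lo, x_hi)
    return p

print(prob_in_region(-1.0, 1.0))   # ~ 0.683 for sigma = 1
```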
Density operators and normal states

A state ω on B(H) is called normal when it is completely additive, i.e., when ω(Σ_i e_i) = Σ_i ω(e_i) for any family (e_i) of mutually orthogonal projections on H; this is the additivity condition (4.28). For example, let (υ_i) be a basis of H with associated one-dimensional projections e_i = |υ_i⟩⟨υ_i| (4.29). If ω is assumed to be a state, then the additivity condition (4.28) implies

Σ_i ω(e_i) = 1,  (4.30)

or, equivalently, using Definition B.6 etc. as well as the notation e_F ≡ Σ_{i∈F} e_i,

lim_F ω(e_F) = 1.  (4.31)

If H is separable, any orthogonal family (e_i) of projections is necessarily countable, and (4.28) is analogous to the countable additivity condition defining a measure. The basic result (Theorem 4.12 in this numbering) is that a state ω on B(H) is normal iff it is induced by a density operator ρ, in that ω(a) = Tr(ρa), cf. (2.33).

Proof. First, eq. (2.33) implies (4.28). To see this, take the trace with respect to some basis (υ_j) of H that is adapted to the family (e_i) in the sense that for each j, either e_i υ_j = υ_j (i.e., υ_j ∈ e_i H) for one value of i, or e_i υ_j = 0 for all i. Then ω(Σ_i e_i) = Σ_j ⟨υ_j, ρυ_j⟩, where the sum Σ_j is over those j for which υ_j ∈ K ≡ ∨_i e_i H. On the other hand, since the basis is adapted, we have Σ_{i∈F} ω(e_i) = Σ_{j∈J_F} ⟨υ_j, ρυ_j⟩, where J_F consists of those j for which υ_j ∈ Σ_{i∈F} e_i H. This gives (4.28). Conversely, assume ω is normal. For the e_i in (4.28) we now take the projections (4.29) determined by some basis (υ_i). For each a ∈ B(H) we then have

ω(a) = lim_F ω(e_F a).  (4.32)

Indeed, using Cauchy-Schwarz for the positive semi-definite form ⟨a, b⟩ = ω(a*b), as in (C.197), and using Σ_i e_i = 1_H and hence ω(a) = ω(Σ_i e_i a), we have

|ω(a) − ω(e_F a)| = |ω(e_{F^c} a)| ≤ √(ω(e_{F^c})) √(ω(a*a)).  (4.33)

Since ω(e_F) + ω(e_{F^c}) = ω(1_H) = 1, eq. (4.31) gives lim_F ω(e_{F^c}) = 0, so that (4.33) gives (4.32). For each finite F ⊂ I, the operator e_F a has finite rank and hence is compact. According to Theorem B.146, the restriction of ω: B(H) → C to the C*-algebra B_0(H) of compact operators on H is induced by a trace-class operator ρ, which (from the requirement that ω be a state) must be a density operator. Hence ω(e_F a) = Tr(ρ e_F a), and we finally have

ω(a) = lim_F Tr(ρ e_F a) = Tr(ρa).  (4.34)

To derive the final equality, we rewrite Tr(ρ e_F a) = Tr(e_F aρ), cf. (A.78) and Proposition B.144, note that aρ ∈ B_1(H), as shown in Corollary B.147, and observe that lim_F Tr(e_F b) = Tr(b) for any b ∈ B_1(H). To see this, simply compute the trace in the basis (υ_i) defining the projections e_i through (4.29), so that Tr(e_F b) = Σ_{i∈F} ⟨υ_i, bυ_i⟩, and note that by Definition B.6 this net converges to Tr(b). Finally, suppose ω(a) = Tr(ρ_1 a) = Tr(ρ_2 a) for each a ∈ B(H) and hence for each a ∈ B_0(H). It follows from (B.476) that Tr(ρa) = 0 for all a ∈ B_0(H) iff ρ = 0. Hence ρ_1 = ρ_2, i.e., a normal state ω uniquely determines a density operator ρ.

If ω is normal, we may therefore use the spectral resolution (2.6) of the corresponding density operator ρ, i.e., ρ = Σ_i p_i |υ_i⟩⟨υ_i|, where (υ_i) is some basis of H consisting of eigenvectors of ρ (which exists because ρ is compact and self-adjoint), and the corresponding eigenvalues satisfy p_i ≥ 0 and Σ_i p_i = 1; see the explanation after Definition B.148. Computing the trace in the same basis gives

ω(a) = Σ_i p_i ⟨υ_i, aυ_i⟩.  (4.35)

We may characterize normality in a number of other ways. First note that because of the duality B_1(H)* ≅ B(H) of Theorem B.146, cf. (B.477), we may equip B(H) with the w*-topology in its role as the dual of the trace-class operators B_1(H), see §B.9; this means that a_λ → a iff Tr(ρ a_λ) → Tr(ρa) for each ρ ∈ B_1(H), or, equivalently, for each ρ ∈ D(H), since each trace-class operator is a linear combination of at most four density operators, as follows from Lemma C.53 with (C.8)–(C.9). The w*-topology on B(H), seen as the dual of B_1(H), is called the σ-weak topology. By Proposition B.46, the σ-weakly continuous linear functionals ϕ on B(H) are just those given by ϕ(a) = Tr(ba) for some trace-class operator b ∈ B_1(H). Secondly, B(H) is monotone complete, in the sense that each net (a_λ) of positive operators that is bounded (i.e., 0 ≤ a_λ ≤ c · 1_H for some c > 0 and all λ ∈ Λ) and increasing (in that a_λ ≤ a_{λ′} whenever λ ≤ λ′) has a supremum a with respect to the standard ordering ≤ on B(H)+, which supremum coincides with the strong limit of the net (i.e., lim_λ a_λψ = aψ for each ψ ∈ H); the proof is the same as for Proposition B.98, and also here we write a_λ ↗ a to describe this entire situation.

Proof. We have seen 1 ↔ 3 ↔ 4, and 2 → 1 is obvious, so establishing 3 → 2 would complete the proof. To this effect, we first note that because the sum (4.35) is convergent, for ε > 0 we may find a finite subset F ⊂ I for which Σ_{i∉F} p_i < ε. Consequently, for such F, the tail of (4.35) contributes at most ε · sup_λ ‖a_λ − a‖ to |Tr(ρ(a_λ − a))|, which is finite because the net is bounded, while each of the finitely many remaining terms ⟨υ_i, (a_λ − a)υ_i⟩ vanishes in the limit. This shows that lim_λ |Tr(ρ(a_λ − a))| = 0, so that assumption 3 implies no. 2.

We denote the normal state space of B(H), i.e., the set of all normal states on B(H), by S_n(B(H)). It is easy to see from Definition B.148 that S_n(B(H)) is a convex (but not necessarily compact!) subset of the total state space S(B(H)).

Corollary 4.14. The relation ω(a) = Tr(ρa) induces an isomorphism of convex sets D(H) ≅ S_n(B(H)) (i.e., ω ↔ ρ). Furthermore, for the corresponding pure states we have (4.38): any pure state ω on B_0(H), as well as any normal pure state on B(H), is given by ω(a) = ⟨ψ, aψ⟩ for some unit vector ψ ∈ H.

The proof of (4.38) is practically the same as in the finite-dimensional case. From Theorem B.146 we obtain another characterization of S_n(B(H)) and hence of D(H): S_n(B(H)) ≅ S(B_0(H)), in the sense that any (pure) state ω′ on B_0(H) has a unique normal extension to a (pure) state ω on B(H), given by the same density operator ρ that yields ω′.
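The correspondence ω ↔ ρ and the trace formula (4.35) are easy to check numerically in finite dimension; a small sketch (ours, not the book's):

```python
import numpy as np

rng = np.random.default_rng(0)

# A density operator rho = sum_i p_i |v_i><v_i| on C^4 and a random
# self-adjoint a; the normal state satisfies omega(a) = Tr(rho a),
# cf. (4.35).
p = np.array([0.5, 0.3, 0.2, 0.0])
v, _ = np.linalg.qr(rng.normal(size=(4, 4)))        # orthonormal basis
rho = sum(pi * np.outer(vi, vi) for pi, vi in zip(p, v.T))
b = rng.normal(size=(4, 4))
a = (b + b.T) / 2                                    # self-adjoint

omega = sum(pi * (vi @ a @ vi) for pi, vi in zip(p, v.T))
print(np.isclose(omega, np.trace(rho @ a)))          # True
```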
It can be shown that any state ω ∈ S(B(H)) has a convex decomposition

ω = t ω_n + (1 − t) ω_s,

where t ∈ [0, 1], ω_n is a normal state, and ω_s is called a singular state. In particular, since for t ∈ (0, 1) the state ω is mixed, a pure state is either normal or singular. Singular states are not as aberrant as the terminology may suggest: such states are routinely used in the physics literature and are typically denoted by |λ⟩, where λ lies in the continuous spectrum of some self-adjoint operator (that has to be maximal for this notation to even begin to make sense, see §4.3 below). Examples of such "improper eigenstates" are |x⟩ and |p⟩, which many physicists regard as idealizations. However, mathematically such states are at least defined, namely as singular pure states on B(H). The key to the existence of such states lies in Proposition C.15 and its proof, which should be reviewed now; we only need the case a* = a.

Proposition 4.16. Let a = a* ∈ B(H) have non-empty continuous spectrum, so that there is some λ ∈ σ(a) that is not an eigenvalue of a. Then ω_λ(f(a)) = f(λ) defines a pure state on A = C*(a), whose extension to B(H) by any pure state is singular.

Proof. Normal pure states on B(H) take the form ω_ψ(b) = ⟨ψ, bψ⟩, where ψ ∈ H is a unit vector and b ∈ B(H). We know from Proposition C.14 that ω_λ is multiplicative on C*(a). However, if some multiplicative state ω on C*(a) has the form ω = ω_ψ, then ψ must be an eigenvector of a; cf. the proof of Proposition 2.3.

The Kadison-Singer Conjecture

To obtain deeper insight into singular pure states, and as a matter of independent interest, we return to the Kadison-Singer problem, cf. §2.6. Recall that this problem asks if some abelian unital C*-algebra A ⊂ B(H) has the Kadison-Singer property, stating that a pure state ω_A on A has a unique pure extension ω to B(H). Here the issue is uniqueness rather than existence, since at least one such extension exists: A is necessarily unital (with 1_A = 1_H) and ω_A is a state on A, so that by the Hahn-Banach Theorem it extends to a functional ω on B(H) with ‖ω‖ = ‖ω_A‖ = 1 = ω(1_H), which is therefore a state, cf. (4.42)–(4.43); the set of all such state extensions is a non-empty compact convex set whose extreme points are pure. The inverse of this map is simply the pullback of the inclusion A → B, i.e., ω_B ∈ P(B) defines ω_A ∈ P(A) by restriction, so that we have a bijection P(A) ≅ P(B), ω_A ↔ ω_B. Since for any pair of C*-algebras A ⊆ B the pullback S(B) → S(A) is continuous (in the pertinent w*-topology), the map ω_B → ω_A is continuous. As in Lemma C.20, this implies that it is in fact a homeomorphism, so that A ≅ B through the inclusion A → B. This gives A = B, and hence A is maximal. Maximality of A implies A′ = A, so that A is a von Neumann algebra, sharing the unit of B(H). To see the relevance of singular states for the Kadison-Singer problem, we first settle the normal case. We know what it means for a state on B(H) to be normal (cf. Definition 4.11 and Corollary 4.13); for arbitrary von Neumann algebras A ⊂ B(H) the situation is exactly the same: we define normality by (4.28) and characterize it by the equivalent properties in Corollary 4.13, where the σ-weak topology on A may be defined either as the one inherited from B(H), or, more intrinsically, as the w*-topology from the duality A = (A_*)*, where the Banach space A_* is the so-called predual of A, e.g., (ℓ^∞)_* ≅ ℓ¹ and L^∞(0,1)_* = L¹(0,1), cf. §B.9.

Theorem 4.18. Let H be a separable Hilbert space and let ω_A be a normal pure state on a maximal commutative unital C*-algebra A in B(H). Then ω_A has a unique extension to a state ω on B(H), which is necessarily pure and normal.

Proof. As noted after (4.41), a pure state on B(H) is either normal or singular. The possibility that ω_A is normal whereas ω is singular is excluded by Corollary 4.13.3, so ω must be normal and hence given by a density operator. The proof of uniqueness is then the same as in the finite-dimensional case, cf. Theorem 2.21.

We now recall the classification of maximal abelian *-algebras (and hence of maximal abelian von Neumann algebras) A in B(H) up to unitary equivalence (cf. Theorem B.118). This classification is the relevant one for the Kadison-Singer problem, since, as is easily seen, A ⊂ B(H) has the Kadison-Singer property iff uAu−1 ⊂ B(uH) has it.
The uniqueness of the finite-dimensional case will be lost: up to unitary equivalence, the cases of interest (Theorem 4.19) are D_n(C) ≅ ℓ^∞({1, ..., n}) acting on C^n, ℓ^∞ ≅ ℓ^∞(N) acting on ℓ²(N), and L^∞(0,1) acting on L²(0,1), as well as direct sums of these. This classification sheds some more light on Theorem 4.18. Since L^∞(0,1) has no pure normal states and D_n(C) has been dealt with in Theorem 2.21, the interesting case is ℓ^∞. Using Corollary 4.13.3 (or the analysis below), it is easy to check that the normal pure states on ℓ^∞ are given by ω_A(f) = f(x) for some x ∈ N; these are vector states of the kind ω_A(f) = ⟨ψ, m_f ψ⟩ with ψ = δ_x, or, in other words, they are given by ω_A(f) = Tr(ρ m_f) with ρ = |δ_x⟩⟨δ_x|. We now invoke a fairly deep result:

Proposition 4.20. A pure state ω on B(H) is singular iff one (and hence all) of the following equivalent conditions is satisfied:
• ω(a) = 0 for each a ∈ B_0(H);
• ω(e) = 0 for each one-dimensional projection e;
• Σ_i ω(e_i) = 0 for the projections e_i = |υ_i⟩⟨υ_i| defined by some basis (υ_i).

One direction is easy: a normal pure state certainly does not satisfy the condition in question. For example, given (2.42) one may take a = |ψ⟩⟨ψ|, which as a one-dimensional projection lies in B_0(H), so that ω_ψ(a) = 1. We omit the other direction of the proof. We conclude from this proposition that a pure singular state on B(ℓ²) cannot restrict to a normal pure state on ℓ^∞, which reconfirms Theorem 4.18. We now study the Kadison-Singer property for each of the three cases in Theorem 4.19 (where the third will be an easy corollary of the first and the second). Since the proofs of the first two cases are formidable, we just sketch the argument. The statement about ℓ^∞ is the Kadison-Singer Conjecture, which dates from 1959 but was only proved in 2013. The first claim (which was already known to Kadison and Singer themselves) is equally remarkable, however, as is the contrast between the two parts of Theorem 4.21. In particular, Dirac's notation |λ⟩ may be ambiguous. The key to the proof of the first claim lies in the choice of a total countable family of normal states on L^∞(0,1), from which all pure states may be constructed by a limiting operation. Here we call a (countable) family (ω_n)_{n∈N} of states on some C*-algebra A total if, for any self-adjoint a ∈ A, the conditions ω_n(a) ≥ 0 for each n imply a ≥ 0 (the converse is trivial). For example, the well-known Haar basis (h_n) of L²(0,1) provides such a family. The functions forming this basis are defined via some bijection β between the set of pairs (k, l) and N, e.g., β(k, l) = k + 2^l, by

h_{β(k,l)}(x) = 2^{l/2} h(2^l x − k),  (4.44)

where h = 1_{[0,1/2)} − 1_{[1/2,1)} is the mother Haar function. Basic analysis then shows that the Haar functions h_n form a basis of L²(0,1) and that the associated vector states ω_n on L^∞(0,1) form a total set, where obviously

ω_n(f) = ⟨h_n, m_f h_n⟩ = ∫₀¹ dx |h_n(x)|² f(x).

The relevance of total sets to the conjecture is explained by the following lemma: if T is a total set of states on a unital C*-algebra A, then S(A) = co(T)−, where co(T)− is the w*-closure of the convex hull of T in A* or in S(A).

Proof. The inclusion co(T)− ⊆ S(A) is obvious, since T ⊆ S(A) and S(A) is a compact (and hence a closed) convex set. To prove the converse inclusion, suppose a = a* ∈ A and s ∈ R are such that ω(a) ≥ s for each ω ∈ T. Then ω(a − s · 1_A) ≥ 0 for each ω ∈ T, so that a − s · 1_A ≥ 0 by totality, and hence ω(a) ≥ s for each ω ∈ S(A). Using Theorem B.43 (of Hahn-Banach type), this property would lead to a contradiction if S(A) were not contained in co(T)−. The second claim, which is the one we will use, follows from the first through a corollary of the Krein-Milman Theorem B.50, stating that if T ⊂ K is any subset of a compact convex set K such that K = co(T)−, then ∂_e K ⊆ T−. This corollary may be proved (by contradiction) from Theorem B.43 in a similar way.
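For concreteness, a small Python sketch (ours) of the Haar functions just defined, with a numerical orthonormality check; the indexing convention β(k, l) = k + 2^l is the one assumed above.

```python
import numpy as np

def haar(k, l):
    """Haar function h_{k,l}(x) = 2**(l/2) * h(2**l * x - k) on [0, 1),
    with mother function h = 1 on [0, 1/2) and -1 on [1/2, 1)."""
    def f(x):
        y = (2.0**l) * x - k
        return 2.0**(l/2) * (((0 <= y) & (y < 0.5)).astype(float)
                             - ((0.5 <= y) & (y < 1.0)).astype(float))
    return f

# Numerical check of orthonormality on a fine grid of [0, 1].
x = (np.arange(100000) + 0.5) / 100000
h10 = haar(0, 1)(x)
h11 = haar(1, 1)(x)
print(np.mean(h10 * h10), np.mean(h10 * h11))  # ~ 1.0 and ~ 0.0
```

The corresponding vector states ω_n(f) = ∫ |h_n|² f dx are then concrete normal states on L^∞(0,1), as used in the totality argument.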
Our next aim is to get rid of the closure in (4.49). The Haar basis yields a map

h : N → S(L^∞(0,1)); (4.50)
n ↦ ω_n, (4.51)

with image T, i.e., the set of Haar states. Since S(A) is a compact Hausdorff space (in its w*-topology), the universal property (B.135) of the Čech–Stone compactification βN of N implies that h extends (uniquely) to a continuous map whose image is compact and hence closed (since βN is compact).

We now pass to the (even) more difficult case of ℓ^∞ ⊂ B(ℓ²). Although this will not be used in the proof, it gives some insight to know which states on ℓ^∞ we are actually talking about, i.e., the singular pure states, and to compare this with (4.53).

Theorem 4.24. There is a bijective correspondence between states ω_d on ℓ^∞ and finitely additive probability measures μ on N, where:
1. ω_d is normal iff μ is countably additive (and hence is a probability measure).
2. ω_d is pure iff μ corresponds to some ultrafilter U on N, in which case: ω_d is normal iff U is principal (and hence singular iff U is free).

This follows from case no. 5 in §B.9, notably eqs. (B.153)–(B.154). In other words, the pure states ω_d on ℓ^∞ are given by ultrafilters U on N through ω_d^{(U)}(f) = lim_U f(n), the limit of f along U; the analogy with (4.53) is even clearer if we write f(n) = ⟨δ_n, m_f δ_n⟩ ≡ ω_n(f). If U = U_n is a principal ultrafilter, n ∈ N, we thus recover the normal pure states ω_d^{(U_n)}(f) = f(n).

We now show that ℓ^∞ has the Kadison–Singer property, making ω^{(U)} the only extension of ω_d^{(U)}. The proof relies on an extremely difficult lemma from linear algebra (formerly known as the paving conjecture). We first define a linear map D : M_n(C) → D_n(C) by D(a)_{ii} = a_{ii}, i = 1, ..., n, and D(a)_{ij} = 0 whenever i ≠ j. In the form we use it, the paving lemma provides, for given ε > 0 and a = a* ∈ B(ℓ²) with D(a) = 0 (where D now denotes the analogous diagonal map on B(ℓ²)), projections e_1, ..., e_l ∈ ℓ^∞ such that ‖e_i a e_i‖ ≤ ε‖a‖ for each i (4.66) and ∑_{i=1}^l e_i = 1_H (4.68). Furthermore, the Cauchy–Schwarz inequality for states gives

|ω(e_i a e_j)|² ≤ ω(e_i a a* e_i) ω(e_j). (4.71)

Since ω(e_i) = ω_d(e_i) and ω_d is a pure state (and hence multiplicative), we have ω(e_i) ∈ {0, 1}, since e_i is a projection. Moreover, in view of (4.68) and the normalization ω(1_H) = 1, there must be exactly one value of i = 1, ..., l, say i = i_0, such that ω(e_{i_0}) = 1, and ω(e_i) = 0 for all i ≠ i_0. Eqs. (4.70)–(4.71) therefore imply that ω(e_i a e_j) = 0 unless i = j = i_0. Using (4.68) once more, we see that ω(a) = ∑_{i,j} ω(e_i a e_j) = ω(e_{i_0} a e_{i_0}), so that |ω(a)| ≤ ‖e_{i_0} a e_{i_0}‖ ≤ 1 · ε‖a‖ by (4.66). Letting ε → 0, we have proved that

ω(a) = ω_d(D(a)), a ∈ B(ℓ²),

provided that ω extends ω_d, as before. This shows that ω is determined by ω_d and hence is unique, completing the proof (sketch) of Theorem 4.21.

Gleason's Theorem in arbitrary dimension

To a large extent the thrust and difficulty of the proof of Gleason's Theorem 2.28 already lies in its finite-dimensional version, but some care is needed in the general case, and also Corollary 2.29 needs to be refined. A major point here is that Definition 2.23 has no unambiguous generalization to arbitrary Hilbert spaces.

Proof. The proof of part 1 is practically the same as in finite dimension, except for the fact that in the proof of Lemma 2.33 the reference to Proposition A.23 should be replaced by Proposition B.79, upon which one obtains a bounded positive operator ρ for which (2.123) holds. The normalization condition (2.110) then yields Tr(ρ) = 1 if the trace is taken over any basis of H, and since ρ is positive this implies ρ ∈ B_1(H), see §B.20 (complete additivity of P is just necessary to relate it to p). Unfortunately, the proof of part 2 exceeds the scope of this book (see Notes).

In infinite dimension, Corollary 2.29 becomes more complicated, too; for one thing, Definition 2.26 of a quasi-state bifurcates into two possibilities.
The one given still makes perfect sense and is natural from the point of view of Bohrification; to avoid confusion we call a map ω : B(H) → C satisfying the conditions in Definition 2.26 a strong quasi-state. In the context of Gleason's Theorem, a slightly different notion is appropriate: a weak quasi-state on B(H) satisfies Definition 2.26, except that linearity is only required on commutative C*-algebras in B(H) of the form C*(a), where a = a* ∈ B(H) (these are singly generated). Since commutative unital C*-subalgebras of B(H) are not necessarily singly generated, and a specific counterexample exists, weak quasi-states are not necessarily strong quasi-states.

Proposition 4.30. The map ω ↦ ω|_{P(H)} gives a bijective correspondence between weak quasi-states ω on B(H) and finitely additive probability measures on P(H).

Proof. For some finite family (e_1, ..., e_n) of mutually orthogonal projections on H, add e_0 = 1_H − ∑_j e_j if necessary and let a = ∑_{j=0}^n λ_j e_j, with all λ_j ∈ R different. Then σ(a) = {λ_0, ..., λ_n}, so that C*(a) ≅ C(σ(a)) ≅ C^{n+1} (cf. Theorem B.94) coincides with the linear span of the projections e_j. If ω is a weak quasi-state, then it is linear on C*(a) and hence also on the e_j, so that ω|_{P(H)} is finitely additive.

Conversely, let μ be a finitely additive probability measure on P(H). If a = a* ∈ B(H) is given, using the notation (B.328) we symbolically define ω on a by

ω(a) = ∫_{σ(a)} λ dμ(e^a_λ). (4.79)

More precisely, for any ε > 0 we use Corollary B.104 to define ω_ε(a) = ∑_{i=1}^n λ_i μ(e^a_{A_i}) and let ω(a) = lim_{ε→0} ω_ε(a); it follows from Lemma B.103 (or the theory underlying the Riemann–Stieltjes integral (4.79)) that this limit exists. Now let b, c ∈ C*(a), so that b = f(a) and c = g(a) for certain f, g ∈ C(σ(a)), and b + c = (f + g)(a), cf. Theorem B.94. By (B.325) we therefore have ω_ε(b + c) = ω_ε(b) + ω_ε(c) up to an error that vanishes with ε. Since this holds for every ε > 0, letting ε → 0 we obtain ω(b + c) = ω(b) + ω(c), making ω linear on C*(a). It is clear that the quasi-state ω thus obtained, on restriction to P(H), reproduces μ, making the map ω ↦ ω|_{P(H)} a bijection.

Another corollary of Gleason's Theorem is the Kochen–Specker Theorem, which we will explain in detail in Chapter 6, where it will also be proved in a different way. Cf. Definitions 6.1 and 6.3. To see that these conditions are equivalent to those stated in Theorem 4.32 (despite the impression that linearity on all commuting self-adjoint operators seems stronger than linearity on each C*(a)), extend ω to a map B(H) → C by complex linearity, as in Definition 2.26.1, and note that dispersion-freeness implies positivity and hence continuity on each subalgebra C*(a) (cf. Theorem C.52 and Lemma C.4). We then see that the two conditions just stated imply that ω is multiplicative on C*(a), and hence pure, see Proposition C.14, which conversely implies that pure states on C*(a) are dispersion-free. We now prove Theorem 4.32.
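Before turning to that proof, we note that the construction in the proof of Proposition 4.30 may be summarized in a single display, restating (4.79) together with its ε-approximation, where (A_i) is a partition of σ(a) into Borel sets of diameter less than ε and λ_i ∈ A_i:

\[
  \omega(a) \;=\; \int_{\sigma(a)} \lambda\, d\mu\big(e^{a}_{\lambda}\big)
  \;=\; \lim_{\varepsilon \to 0}\, \sum_{i=1}^{n} \lambda_i\, \mu\big(e^{a}_{A_i}\big).
\]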
7,841.4
2017-05-13T00:00:00.000
[ "Physics" ]
A Review of Process Parameters That Affect Extrusion on Demand

Additive manufacturing (AM) processes have been introduced to fabricate three-dimensional objects from liquid polymers and powder particles, e.g., polymers and metals. This process has also been applied to fabricate three-dimensional ceramic objects from paste materials, which are composed of liquid and solid components. To build an additive model, the paste material is extruded under a pressure force through a nozzle using ram extrusion; the model is then constructed layer by layer on the table to form the designed shape. To improve the capability of the extrusion process, Extrusion On Demand (EOD), which refers to the ability to regulate the start and stop of paste extrusion, has been developed to improve the accuracy of material deposition. This paper presents a review of process parameters that affect extrusion on demand, such as ram velocity, dwell time, paste properties, and extrusion mechanisms.

Introduction

Additive manufacturing (AM) is a technology that fabricates three-dimensional (3D) objects by adding material layer by layer directly from a CAD (Computer Aided Design) model [1]. Traditional AM technologies have been used to fabricate 3D objects based on raw material states (liquid-based, solid-based, and powder-based), for example liquid polymers and powder particles including polymers and metals. These materials have some disadvantages, such as a high melting point and a high cost of raw materials [2]. To overcome these disadvantages, paste materials have been applied to fabricate three-dimensional parts in additive manufacturing. Paste is a kind of material composed of solid and liquid phases [3]. Pastes are made up of many different substances and are formed using an extrusion method. Paste extrusion has been applied in various industries, such as ceramic parts, cosmetic pencils, tiles, food manufacturing, animal feeds, and PTFE wires [4]. Extrusion is carried out by applying pressure to a ram extruder, as illustrated in Figure 1(a). Normally, the ram extrusion system consists of a control system and an extrusion device. In the paste extrusion process, pressure is generated to force the paste through a nozzle. The paste flow is regulated by controlling the ram movement, which moves down to extrude the paste material. The extrudate is deposited layer by layer on the table to form the designed shape. To improve the capability of the extrusion process, Extrusion On Demand (EOD), which refers to the ability to regulate the start and stop of paste extrusion, has been developed to improve material deposition. This paper presents a review of the process parameters that affect extrusion on demand, namely ram velocity, dwell time, paste properties, and extrusion mechanisms.

The paste extrusion process

The process of paste extrusion is generally carried out in three steps: paste preparation, forming, and finishing [3]. In paste preparation, powder components and liquid are mixed in a suitable mass proportion to produce a paste; these components are contained in the barrel. In forming, pressure is generated to force the paste through a nozzle; the paste is extruded from the extruder by ram extrusion, and the extrudate is deposited layer by layer to form a model. During the finishing stage, the extrudate is solidified by thermal processing. Since a conventional extrusion method extrudes the paste using a ram extruder, the extrudate exhibits head and tail effects, as shown in Figure 1(b).
The quality of the designed shape depends on these effects. To achieve accuracy and quality of the designed shape, the conventional extrusion method has been improved to control the start and stop of extrusion on demand, in order to enhance the performance of the extrusion process. In addition, the ram velocity, dwell time, paste properties, and extrusion mechanism are the main parameters that influence the paste extrusion process.

Ram velocity

The paste can flow when the pressure is high enough to force the paste through the nozzle. The pressure force is generated by the ram extruder, which is driven by a hydraulic or mechanical system. Because of the relation between ram velocity and pressure force, it is important that sufficient pressure force is generated.

Dwell time

Since there is a delay at the start and stop of extrusion, the dwell time is the period for which the gantry remains stationary after the extrusion force is activated, compensating for this delay (see the controller sketch below). Determining the dwell time is important, because an excessive dwell time leads to accumulation of paste material at the start point of a printed line, while too short a dwell time causes the nozzle to move forward before paste is extruded.

Paste properties

Paste materials are made up of both liquid and solid components, i.e., a paste is a material with both a liquid and a solid phase. Paste properties, especially the liquid content and the paste viscosity, are significant parameters that influence the extrusion process. Variation of the liquid content in the paste material causes liquid phase migration, in which the liquid phase moves faster than the solid particles during extrusion, so paste regions with low liquid content become drier [6].

Extrusion mechanism

The ram extruder is widely used for conventional paste extrusion. The schematic diagram of ram extrusion in Figure 2(a) shows a ram extruder, a barrel, and a nozzle. The paste material is pressed through the nozzle to generate the deposition process, and the start and stop of extrusion are regulated by the ram movement. This conventional extrusion makes it difficult to fabricate complex shapes because of the difficulty of determining the time delay for the start and stop of extrusion. Therefore, other mechanisms, originally used in dispensing technology, have been introduced to improve extrusion on demand. The needle valve extrusion mechanism consists of a plunger, a barrel, a shutter needle, and a nozzle, as shown in Figure 2(b). The shutter needle functions as an opening and closing valve: the paste material is pressed by the force applied to the plunger, and the shutter needle is moved up or down by pneumatic force to start and stop the extrusion process. Alternatively, the auger valve extrusion mechanism consists of a servo motor, a barrel, and an auger valve, as shown in Figure 2(c); this valve extrudes the paste material by rotation. The needle valve and the auger valve have been widely used in dispensing of paste materials [7]. To improve extrusion performance, the ram velocity, dwell time, paste properties, and extrusion mechanisms are the main parameters that have been studied to investigate their effect on the ability of extrusion on demand for paste extrusion processes, as shown in Table 1.
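To make the role of the start and stop dwell times concrete, the following Python sketch shows how a printing controller might hold the gantry stationary after switching the extrusion force. This is a minimal illustration, not code from any of the reviewed systems: the Extruder and Gantry classes, their method names, and the 0.45 s value (of the order reported for ram extrusion by Li et al. (2017)) are all assumptions.

import time

class Extruder:
    """Minimal stand-in for a ram/valve extruder driver (hypothetical)."""
    def apply_force(self):
        print("extrusion force on")
    def release_force(self):
        print("extrusion force off")

class Gantry:
    """Minimal stand-in for the motion system (hypothetical)."""
    def start_print_move(self):
        print("gantry: printing move")
    def start_travel_move(self):
        print("gantry: travel move")

START_DWELL_S = 0.450  # assumed start dwell; tune per mechanism and paste
STOP_DWELL_S = 0.450   # assumed stop dwell

def begin_segment(extruder, gantry):
    # Activate the force first; the gantry waits out the start dwell so
    # that paste reaches the nozzle exit before the head starts moving.
    extruder.apply_force()
    time.sleep(START_DWELL_S)
    gantry.start_print_move()

def end_segment(extruder, gantry):
    # Release the force, then wait out the stop dwell so that the head
    # does not drag a tail of paste into the next travel move.
    extruder.release_force()
    time.sleep(STOP_DWELL_S)
    gantry.start_travel_move()

begin_segment(Extruder(), Gantry())
end_segment(Extruder(), Gantry())

A shorter dwell, as with needle or auger valves, directly shortens the head and tail effects, which is why those mechanisms performed better in the line tests reviewed below.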
Mason et al. (2007) modified the traditional ram extrusion mechanism in order to improve extrusion on demand: a load cell housing was added to connect directly with the plunger, while the plastic syringe was replaced by a metal barrel. This new ram extrusion mechanism was implemented to print line tests with a set of ram retreating velocities (0, 0.25, 0.5, 1, 1.5, 2, 2.5, and 3 mm/s) and an extrusion force of 308 N. The results showed that ram retreating velocities higher than 2 mm/s were able to stop extrusion on demand. The influence of the ram velocity on the extrusion force and on liquid phase migration in the Freeze-form Extrusion Fabrication process has also been studied. Liu and Leu (2009) conducted experiments to investigate liquid phase migration for an aqueous Al2O3 paste. The aqueous paste and five ram velocities (10, 5, 2, 1.5, and 1 µm/s) were used in this study. The experimental results showed that strong liquid phase migration occurred when the ram velocity was lower than 5 µm/s. Oake et al. (2009) presented an experiment to investigate the dwell times that affect the extruded filament. In this experiment, the dwell time was set as a function of the reference ram force, which is applied to activate paste extrusion. The experimental values were 50%, 55%, 60%, 65%, and 70% of reference ram forces of 450 N and 475 N. The results showed that discontinuous filaments appeared at dwell times of less than 65%. In addition, increasing the reference ram force from 450 to 475 N led to accumulation of material at the start of the filaments, indicating that a low ram force should be used for the extrusion process. Liu et al. (2013) conducted an experiment to study the influence of ram velocity and paste properties on the extrusion process by applying a series of ram velocities (2, 5, 10, and 15 µm/s) and three pastes of different viscosity. The results showed that the extrusion pressure increased slowly at the high velocities (10 and 15 µm/s), whereas it increased rapidly at the low velocities (2 and 5 µm/s). At low velocity, the liquid phase moved into the extrudate, so the remaining paste in the barrel became drier, and drier paste in the barrel needs a higher pressure force to keep the paste flowing. In addition, a ram velocity of 2 µm/s was used to study the effect of paste viscosity on liquid phase migration. The results showed that liquid phase migration occurred when a higher-viscosity paste was used at the lower ram velocity. The paste viscosity had a significant effect on the extrusion process [12]. A high viscosity of the paste material means a high amount of solid particles; therefore, a high extrusion force is needed to extrude such materials, whereas a low-viscosity paste makes it difficult to form the designed shape. Li et al. (2017) conducted an experiment to study the influence of extrusion mechanisms, dwell time, and paste properties on the start and stop of extrusion. The start dwell times for the ram extrusion, needle valve, and auger valve mechanisms were 450, 70, and 0 ms, respectively. These mechanisms were used to test the ability of extrusion on demand. Dash-line printing tests were conducted for all three methods: the lines were printed from right to left, and the head and tail effects of the printed lines were compared, using images of the printed dash-line segments, to assess the extrusion start and stop capability. The experimental results showed that the dash-line segments printed by the needle valve and auger valve methods have shorter tails than those of the ram extrusion method, because the start and stop dwell times of the needle valve and auger valve methods are shorter than those of the ram extrusion method.
Filaments with a short start dwell time thus showed accurate starts and stops of extrusion.

Conclusion

Extrusion methods have been widely used to fabricate 3D objects from paste materials, which are made up of liquid and solid particles. The ram extrusion method is widely used to form paste materials; in this method, a pressure force is generated to press the paste material through a nozzle using a ram extruder. To improve the capability of the extrusion process, four main parameters were reviewed: the ram velocity, dwell time, paste properties, and extrusion mechanisms are the parameters that influence the paste extrusion process. Ram extruder, needle valve, and auger valve based extrusion methods have been tested with line printing. Because of their shorter dwell times, the needle valve and the auger valve showed a better ability to start and stop extrusion on demand than the ram extrusion.
2,601.2
2018-01-01T00:00:00.000
[ "Materials Science" ]
Advanced Driver-Assistance System (ADAS) for Intelligent Transportation Based on the Recognition of Traffic Cones

Great changes have taken place in automation and machine vision technology in recent years. Meanwhile, the demands for driving safety, efficiency, and intelligence have also increased significantly. More and more attention has been paid to research on advanced driver-assistance systems (ADAS) as one of the most important functions in intelligent transportation. Compared with traditional transportation, ADAS is superior in ensuring passenger safety, optimizing path planning, and improving driving control, especially in an autopilot mode. However, level 3 and above of the autopilot are still unavailable due to the complexity of traffic situations, for example, detection of a temporary road created by traffic cones. In this paper, an analysis of traffic-cone detection is conducted to assist with path planning under special traffic conditions. A special machine vision system with two monochrome cameras and two color cameras was used to recognize the color and position of the traffic cones. The result indicates that this novel method could recognize the red, blue, and yellow traffic cones with 85%, 100%, and 100% success rates, respectively, while maintaining 90% accuracy in traffic-cone distance sensing. Additionally, a successful autopilot road experiment was conducted, proving that combining color and depth information for recognition of temporary road conditions is a promising development for intelligent transportation of the future.

Introduction

With rapid economic development, new opportunities have emerged for the automobile industry. In recent years, both car ownership and driver numbers have increased sharply in China. According to data from the Ministry of Communications, before 2018 China already had over 300 million vehicles and 400 million drivers [1], and with a fast increase in the number of vehicles, some serious traffic issues have become noticeable. First, traffic safety continues to be very challenging. Globally, more than 1.25 million people die due to traffic accidents annually, with the total number having reached over 38 million since the start of the automobile industry [2][3][4]. The situation in China is not optimistic, because over 100 thousand people get injured or die in traffic accidents every year, costing the economy more than 10 billion Renminbi (RMB). Second, traffic jams have become more and more serious. This has become a global problem in both developed and developing countries due to traffic approaching or exceeding road capacity. According to the 2019 report from AutoNavi, rush hour traffic jams occurred in over 57% of the cities in China, while 4% of the cities suffered heavy ones [5]. Traffic jams increase travel time, gasoline consumption, and exhaust emission, while at the same time they decrease driving safety tremendously. Advanced driver-assistance systems (ADAS), an important part of intelligent transportation, were developed to overcome the above problems [6]. With developments in telecommunication services, sensing technologies, automation, and computer vision, ADAS development has achieved positive results in traffic resource integration, real-time vehicle status, and driving environment monitoring [7][8][9][10]. Generally, ADAS consists of active safety and passive safety. Passive safety relies on certain devices, such as safety belts, airbags, and bumpers, to protect passengers and reduce damage [11].
However, passive safety cannot improve driving safety by itself, because 93% of traffic accidents are caused by the drivers' lack of awareness of the danger [12]. Also, it has been reported that 90% of dangerous accidents could have been avoided if the drivers had been warned just 1.5 seconds earlier [13]. Consequently, active safety, developed to sense and predict dangerous situations, has been considered an important part of modern vehicles. By exchanging data with other devices on the Internet of Things (IoT), active safety modules can assist drivers in making decisions based on the overall traffic status and replace traffic lights for adaptive scheduling of vehicles at intersections [14]. Active safety modules can also estimate the risk of current driving behaviors by analyzing dynamic information from nearby vehicles via telecommunication services and cloud computing. If the risk is high and might cause a collision, the vehicle can warn the driver to correct the driving behavior, and in urgent cases the active safety modules can take over control of the vehicle to avoid a traffic accident [15]. The latest active safety modules have achieved the identification of traffic signs by applying deep machine learning technology. As a result, a vehicle can recognize a traffic warning or limitation and remind the driver not to violate the traffic rules [16]. In response to the need for intelligent transportation, ADAS research has focused on autopilots, with many countries (especially the US, Japan, and some European countries) investing a lot of money and effort into their development and making outstanding achievements [17]. Vehicular ad hoc network (VANET) technology, which provides channels for collecting real-time traffic information and scheduling vehicle crossings in intersection zones, offers a new approach to releasing traffic pressure when traditional governance cannot solve the congestion issue effectively. It reduces the average vehicle waiting time and improves traveling efficiency and safety by gathering proper traffic-related data and optimizing scheduling algorithms [18][19][20]. Many accidents caused by the driver's inattention to traffic signs can be avoided if the warnings are noticed in advance. The traffic-sign recognition function, which includes traffic-sign detection and traffic-sign classification, has been developed to solve this issue via machine vision technology. Since the camera-captured images include a lot of useless information, sliding window technology has been used to locate the traffic sign region in the image. Then, certain algorithms, such as the histogram of oriented gradients (HOG), support vector machines (SVM), random forests, and convolutional neural networks (CNN), are used for feature detection and classification [21][22][23]. With sliding window technology being rather time-consuming, some researchers have proposed other solutions for locating traffic regions (i.e., regions of interest (ROI)), which decreased the average image processing time to 67 ms [24]. One of the most important functions of ADAS is collision avoidance, where warning technology senses potential accident risks based on certain factors, such as vehicle speed, space between vehicles, and so on [22]. By installing proper sensors, like radar, ultrasonic sensors, or infrared sensors, multiple target vehicles and objects within 150 m can be measured with precision and assessed rapidly for a safe distance [21,24].
One obvious challenge, however, is that space information may be missing in certain blind spots that sensors cannot detect [23]. To solve this problem, vehicle-to-vehicle (V2V) communication and the Global Positioning System (GPS) have recently been introduced. Since then, collision avoidance warning has begun to rely not only on passive measurements but also on status data of nearby vehicles collected by active communication [25]. Even though many different measures have been used in danger detection, one issue remains challenging. Colorful traffic cones that temporarily mark roads for road maintenance control or accident field protection are often hard to detect and process by space sensors due to their small size. If neither the driver nor the ADAS notices the traffic cones on the road, serious human injuries and property damage may occur. Some fruitful research in detecting traffic cones has been conducted using cameras and LiDAR sensors, applying technologies such as machine vision, image processing, and machine learning [26][27][28]. However, some problems have become noticeable. First, high-quality sensors like LiDAR are expensive, and manufacturers are not willing to install them without a sharp cost decrease. Second, machine learning technology requires a lot of system resources, and on-board computers are not sufficient. Thus, the overall objective of this study was to develop a cost-effective machine vision system that can automatically detect road traffic cones based on the cone distribution to avoid any potential accidents. This method was able not only to recognize traffic cones on the roads but also to sense their distance and assist the automatic vehicle control in navigating them smoothly. This required the development of algorithms for quick recognition of traffic cones by color and for sensing the corresponding distance data.

Experiment Car and Traffic Cones. An experimental car was designed with a 2600 mm length, a 1500 mm width, and a 1650 mm height, and its powertrain was composed of a 4 Ah battery and an 80 kW DC motor, as shown in Figure 1. The controlling system of the car contained an embedded computer (Intel i7 CPU, 8 GB RAM), a vehicle controlling unit (VCU), a battery management system (BMS), a brake controller, a DC motor controller, and a machine vision system, as shown in Figure 2. The embedded computer, which worked as the brain of the car, not only controlled the machine vision system to capture the road images but also sent appropriate commands to the VCU after processing the road images and analyzing the car status. The VCU performed as a bridge between the embedded computer and the hardware on board. The VCU collected real-time status data of the car and sent it to the embedded computer. At the same time, it controlled the BMS, the DC motor controller, and the brake controller as they executed valid commands from the embedded computer. For safety reasons, the VCU rejected any invalid commands or any commands received in the presence of a component error. Each part of the controlling system communicated through the CAN bus with a 250 kbps baud rate, except for the machine vision system, which exchanged data with the embedded computer through Ethernet. The red, blue, and yellow traffic cones that are widely used on the roads in China were 200 mm × 200 mm × 300 mm (length, width, and height, respectively) with a reflective stripe attached in the middle, as shown in Figure 1.
The red and blue traffic cones were used for indicating the left and right edges of a temporary road, while the yellow ones specified the start and end of a road in this experiment. Figure 3 shows the Smart Eye B1 camera system (consisting of four cameras) chosen for this research. Two monochrome cameras, which composed a stereo vision system, were used for sensing real-time 3-dimensional environment data, whereas the color cameras detected color information. According to the specifications of the Smart B1 camera system, its error of space prediction is <6% within a detectable range of 0.5-60 m. Additionally, this camera system can automatically adjust white balance. The resolution of all cameras was set to 1280 × 720, and the frequency of all cameras was set to 12 fps. Two independent Ethernet links with 100 megabit bandwidth controlled the data exchange for the monochrome and color cameras. The camera was placed 1500 mm above the ground to simulate the field of view in a sedan. Example images are shown in Figure 4(a).

Range Detection via Stereo Vision. In this experiment, two monochrome cameras were used to build a stereo vision system. A point P (x, y, z) in the world coordinate system is projected into the two cameras with the coordinates P_left (xl, yl, zl) and P_right (xr, yr, zr). Since the height of the two cameras was the same, the values of yl and yr were the same, and the 3-dimensional coordinate could be reduced to a 2-dimensional coordinate for analysis, as shown in Figure 5. Here f is the camera's focal length, while b is the baseline of the left and right cameras. According to the triangle similarity law, the following relations exist:

xl = f·x / z, xr = f·(x − b) / z, yl = f·y / z. (1)

From equation (1), the x, y, and z values can be calculated with the following equations:

x = b·xl / (xl − xr), y = b·yl / (xl − xr), z = f·b / (xl − xr). (2)

A depth image D(x, y), which included the object's distance information in each pixel, was generated from the z values as a 32-bit floating-point matrix that could be visualized via the handleDisparityPointByPoint () API from the camera system's Standard Development Kit (SDK). A processed depth image is presented in Figure 4(b), with the warmer color indicating the longer distance. The original depth image format was converted from the 32-bit floating-point matrix to a color image, because the float data and pixel values exceeded 255 and were unavailable for display on the current operating system.

Traffic Cone Detection. Traffic cone detection, which was developed in C++ with the OpenCV library, consisted of four functions: color recognition, size and distance calculation, noise filtering, and traffic cone marking.

Color Recognition. All traffic cones had the same shape, size, and reflective stripes, except for their color. Since the differences between the yellow, red, and blue colors were obvious, they could be distinguished in the color images by processing these images during the daytime. The color detection algorithm is shown in equation (3): the red, green, and blue values in each pixel of the color image H(x, y) were used for ratio calculations that determine the color class of that pixel.

Size and Distance Calculation. When all traffic cone pixels in image H(x, y) were marked, the traffic cone's size and distance were calculated, as shown in equation (4). The size S, initialized as S = 0, is the number of pixels in one isolated traffic cone area in H(x, y), while D is the average gray value of the same area in the depth image D(x, y).
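The color-ratio test of equation (3) and the triangulation of equation (2) are simple enough to sketch in a few lines of Python. The thresholds below are illustrative placeholders (the paper's calibrated ratio values are not reproduced in the text), and the focal length f and baseline b are assumed values, not the Smart Eye B1 parameters.

import numpy as np

def depth_from_disparity(xl, xr, f=800.0, b=0.12):
    """Stereo triangulation as in eq. (2): z = f*b / (xl - xr).
    f is in pixels and b in metres; both are assumed here."""
    disparity = xl - xr
    return f * b / disparity if disparity > 0 else np.inf

def classify_pixel(r, g, b_):
    """Ratio-based color test in the spirit of eq. (3).
    The 0.5 / 0.35 / 0.2 thresholds are illustrative only."""
    s = float(r) + float(g) + float(b_) + 1e-6
    if r / s > 0.5:
        return "red"
    if b_ / s > 0.5:
        return "blue"
    if r / s > 0.35 and g / s > 0.35 and b_ / s < 0.2:
        return "yellow"  # red and green dominate, little blue
    return None

# A saturated red pixel, and a 20-pixel disparity at the assumed f and b:
print(classify_pixel(200, 30, 25))         # -> "red"
print(depth_from_disparity(640.0, 620.0))  # -> 4.8 (metres)

In the actual system, the per-pixel classification runs over the whole color image H(x, y), and the resulting cone regions are then paired with the depth image D(x, y) as described next.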
Noise Filtering and Target Marking. Since various objects showed up in the color images with colors similar to those of the traffic cones, it was necessary to eliminate those as noise. Because the traffic cone size is inversely proportional to its distance in the images, filtering of the fake traffic cone pixels was conducted based on the size S and the average distance data D, as shown in equation (5): a candidate was confirmed if S was equal to or larger than the threshold at distance D, and ignored otherwise. Finally, minimal external rectangles were calculated to mark all of the existing traffic cones in the area as the detected traffic cones:

traffic cone, if S ≥ threshold at D; not a traffic cone, if S < threshold at D. (5)

Results and Discussion

The experiment was separated into a color marking test and a distance matching test. The color marking test was mainly focused on traffic cone recognition, whereas the distance matching test validated the space measuring function. In addition, a road test was conducted to validate the algorithm's stability and efficiency.

Traffic Cone Recognition Test. Twenty red traffic cones, fourteen blue cones, and sixteen yellow cones were manually placed in front of the experiment car. As shown in Figure 6, recognized traffic cones were marked by rectangles with the same colors as the bodies of the cones, whereas the unrecognized ones were marked with white rectangles. The blue and yellow traffic cones reached a 100% detection success rate, while the red ones were accurately detected 85% of the time. The three undetected red traffic cones were located close to the left and right edges of the image and placed on a section of the playground that was reddish in color. Also, one of them was 10 meters away from the camera, and two were over twenty meters away from the camera. The ground color might have influenced red color recognition.

Distance Matching Test. After the traffic cone marking process, the distance data matching test was conducted, and the experiment results are shown in Figure 7. Fourteen blue and sixteen yellow traffic cones were matched with the corresponding distance data from the depth image with a 100% accuracy rate. However, only 15 out of 20 red traffic cones had the corresponding distance data in the pixel area of the depth image. Besides the three red traffic cones undetected in the recognition test, another two red ones on the left side, which were close to a blue pole, were mismatched in color and depth. The overlap might be the reason for this error. Consequently, 45 out of 50 traffic cones were successfully paired with their distance information, and the overall success rate was 90%. For the paired traffic cones, the prediction error between the predicted distance and the manually measured distance ranged from 2 cm to 1.1 m, and this error went up as the distance between the camera and the cone increased. This error was within 6%, which was acceptable while the experiment car ran at a speed of 10 km/h.

Road Test. To simulate a temporary road, the red traffic cones were designated as the left road boundary and the blue ones as the right road boundary. The yellow traffic cones were used to indicate the start and end of the temporary road. The distance between any two traffic cones of the same color was 5 m, and the width of the temporary road as marked by the red and blue cones was 3 m.
The temporary road included a curved-line section and a straight-line section, and the road test images are shown in Figure 8. The experiment demonstrated that the machine vision system could detect red, blue, and yellow traffic cones, and the experiment car in an autopilot mode could successfully navigate a temporary road at a speed of 10 km/h. Without the influence of similar colors, the success rate of recognition increased. At times, one or two traffic cones were missing from a frame of the color and depth images, and this might be explained by the following. First, some cones that were near the left and right edges of the images could not be paired in color and depth, as also happened in the initial static test. Since the distance between the car and the traffic cones near the edge of the image was quite long, the error would not impact driving safety. Besides, 12 frames of color and depth images were captured per second, so the missing cones could be detected in the following frames as they moved away from the image boundary area. Second, traffic cones that were entering or leaving the images while the experiment car was moving might not have been detected if they showed up only partially. Once these traffic cones fully entered the images, this problem was solved automatically.

Conclusion

An image processing algorithm based on color and depth images was successfully applied to traffic cone detection. Each image frame was analyzed within 80 ms, which included one color and one depth image capture and processing. The traffic cones were very accurately recognized by color, with the success rates of color recognition being 85%, 100%, and 100% for red, blue, and yellow cones, respectively. Additionally, the distance was successfully sensed for 90% of the traffic cones by pairing color and depth images. Some of the cones were missing in some of the image frames when they were located around the image edge area, but they could be found in the following frames of the dynamic test. With 12 frames per second in the machine vision system, cones at the edges of the area naturally came in and out of the field of vision of the moving camera. This method was very effective on a temporary road marked by traffic cones of different colors. The advantages of using paired color and depth images for traffic cone detection can be summarized as follows. (1) This method is sensitive to small safety-related traffic cones. (2) It uses a highly efficient and stable algorithm for recognition processing. (3) It is a cost-effective solution for maintaining safe driving on temporary roads.

Data Availability

All data presented and analyzed in the study were obtained from laboratory tests at Beijing Information Science & Technology University in Beijing, China. All laboratory testing data are presented in the figures and tables in the article. We will be very pleased to share all our raw data. If needed, please contact us via e-mail: suqinghua1985@qq.com.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
4,592
2020-06-02T00:00:00.000
[ "Computer Science" ]
Timing of migration and African non-breeding grounds of geolocator-tracked European Pied Flycatchers: a multi-population assessment

Using light-level geolocators, eight European Pied Flycatchers (Ficedula hypoleuca) from two breeding sites in Czechia were tracked. We also gathered all available geolocator tracks on 76 individuals from four European populations and compared the timing of annual cycle events and the African non-breeding sites among all populations. Individuals from both Czech breeding sites had overlapping migration events and non-breeding locations. Four individuals resided at the southwestern edge of Mali, two in Burkina Faso, one in Guinea, and the easternmost one in the Ivory Coast. On average, the birds left the Czech breeding grounds on 8 August and took between one and three stopovers during autumn migration. Birds crossed the Sahara on its western edge on average on 13 September. Mean arrival at the African non-breeding grounds, 47.5 days after departure, was on 2 October (range 10 September to 10 October). One bird showed intra-tropical movement within West Africa when, after a 60-day residency, it moved approximately 3° westwards. Estimated locations at the African non-breeding grounds overlapped among tracked birds from five European breeding sites. Statistically, however, we could detect longitudinal segregation into two clusters. Birds from the British and Finnish breeding populations shared non-breeding grounds and were located in Africa west of the second cluster of birds from the Czech and Dutch breeding populations. We show considerable population-specific differences in the timing of annual cycle events. Birds from Dutch breeding sites were the first in all three phases (departure from breeding sites, Sahara crossing, and arrival at African non-breeding grounds), followed by the British, Czech, and Finnish birds, respectively. All flycatchers tracked so far fill only the western part of the African non-breeding range. For a complete understanding of the migration pattern in the species, we highlight the need for tracking studies from the eastern part of the range.

Introduction

Since the beginning of the twenty-first century, advances in bird-tracking devices have unprecedentedly improved our knowledge of the migration ecology of individual small-bodied songbirds. Retrieval of devices from tracked individuals is often labour-intensive, costly, and challenging, which results in many studies restricting the fieldwork to single sites and small tracking sample sizes. Spatial replication is, however, critical for a meaningful understanding of the migration ecology of any species. The best approach is to have multi-population studies across a species' range. In recent years, a growing number of studies has shown the power of multi-population assessments in Great Reed Warblers Acrocephalus arundinaceus (Koleček et al. 2016), Red-backed Shrikes Lanius collurio (Pedersen et al. 2020), Common Rosefinches Carpodacus erythrinus (Lisovski et al. 2021), Northern Wheatears Oenanthe oenanthe (Meier et al. 2022), and non-Passerines (e.g. Finch et al. 2015; Åkesson et al. 2020; Hahn et al. 2020). Such assessments allow for a deeper understanding of migratory corridors and the spatiotemporal organization of distant populations across the year. To add to this list of multi-population assessments, we tracked the European Pied Flycatcher with light-level geolocators at two breeding sites in Czechia.
We aim to provide detailed data on the migration patterns of birds from these Czech sites. In addition, we take the opportunity to summarize the current knowledge of the species' migration patterns based on published tracking results from four other populations available to date (Ouwehand et al. 2016; Ouwehand and Both 2017; Bell et al. 2022). For all birds sampled across European sites, we aim to provide an overview of population-specific non-breeding grounds and timings of annual cycle events. We intend to assess whether breeding location in Europe plays a role in the clustering of individuals at African residency areas, and whether the timing of annual cycle events is linked to the breeding origin of populations, i.e., whether northern breeding populations migrate later than southern ones at all stages of the annual cycle (Briedis et al. 2016; Gow et al. 2019).

Study sites and geolocators in Czechia

We studied the migration of European Pied Flycatchers at two breeding sites in Czechia. The first study site was in Northern Bohemia (50.62 N, 15.83 E); the second site was in North-eastern Moravia (49.95 N, 17.25 E). The great circle distance (the shortest distance measured along the surface of a sphere) between the two sites is 170 km. The first site shows stable numbers of breeding birds in a nest box population. At the second site, there is a steady decline of breeding birds, and in some plots with nest boxes the population underwent local extinction during recent decades. We deployed light-level geolocators (model GDL2 with 7 mm light stalk, Swiss Ornithological Institute) on adult breeding birds across three different field seasons: 2012, 2014, and 2015. In each season, we deployed 20, 36, and 38 geolocators, respectively (Online Supplement 1, Table S2). All birds (42 males and 52 females) were trapped while they were feeding nestlings (at the age of 6-11 days) in nest boxes. We attached the geolocators on the birds' backs using leg-loop harnesses made of 1 mm thick silicone. Each device, including the harness, weighed approximately 0.6 g (< 5% of the bird's body mass). In the years following the deployment, we recovered one, three, and five geolocators, respectively. Due to technical difficulties, one logger failed to record, and seven geolocators from 2014 and 2015 contained data only for autumn migration and parts of the wintering period. However, we could identify the African non-breeding residency sites of all seven birds. The remaining geolocator from 2012 stopped recording on 4 May 2013, shortly after the bird had returned to the breeding area. The overall return rate of logger-tagged birds was 11.5% (6/52) for females and 7.1% (3/42) for males. At both sites, regular control of nest box occupancy by flycatchers and overall nest success was performed, but due to a lack of manpower and funding no regular recaptures of nesting birds were done. The second site in Moravia had a small population size, which also contributed to the lack of a control group. Thus, we lack a formal control group of ringed-only birds for the two sites. The only relevant data on returns of control birds are from a nearby (13 km from the site in Moravia) long-term study site in Dlouhá Loučka (49.83 N, 17.21 E). At that site, the return rate during 2005-2019 was 12% for females (3/25) and 13.6% for males (3/22; P. Adamík unpubl. data). There was no significant difference between the overall recapture rates of logger-tagged (9/94) and the above-mentioned untagged birds (6/47; χ² test, χ² = 0.27, P = 0.604).
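The recapture-rate comparison above is a standard test on a 2×2 contingency table; a minimal Python sketch using scipy is given below. The counts are taken from the text (9 of 94 tagged and 6 of 47 untagged birds recaptured); the exact χ² and P values obtained may differ slightly from those reported, depending on the continuity correction applied.

from scipy.stats import chi2_contingency

# Rows: logger-tagged vs untagged; columns: recaptured vs not recaptured.
table = [[9, 94 - 9],
         [6, 47 - 6]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")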
Geolocator data analyses

We used the threshold method (Lisovski and Hahn 2012) to determine the sunrise and sunset times of the recorded light data using the 'GeoLocator' software (Swiss Ornithological Institute), setting the light level threshold to 1 unit on an arbitrary scale (i.e., the minimum ambient light detectable by the given light sensor). All further analyses were conducted using the R package 'GeoLight' v 2.0.0, following the standard procedures (Lisovski and Hahn 2012; Lisovski et al. 2020). Using the 'loessFilter' function, in each dataset we first filtered out outlying twilight events that exceeded two interquartile ranges (k = 2) of the residuals from a local polynomial regression. We determined the stationary periods with the 'changeLight' function by setting the minimal stationary period to 2.5 days and the probability of change to q = 0.9. When calculating geographic positions for the stationary periods, we excluded 7 days on either side of the equinox times and later filtered out all positions north of 80°N and south of 20°S (more than 30° of latitude from the breeding and median African non-breeding site latitudes). We estimated the geographic positions of the stationary periods using sun-elevation angles derived from Hill-Ekstrom calibration; when this was not possible, we used in-habitat calibration from the pre-migratory period (Lisovski et al. 2020; Online Supplement 1, Table S3). However, neither of the two methods worked for three of our datasets. For these three datasets, we developed and used a new calibration method, 'equinox calibration'. This calibration method calculates the appropriate sun-elevation angle for a specified number of days around the equinox time, when the day and night length at any given geographic location is just about 12 h. Thus, any deviation from the 12-h day/night length in the geolocators' recordings reflects the measurement error due to the sensitivity limits of the light sensor or shading by vegetation, weather, etc. The calibration method finds the appropriate sun-elevation angle that would give the desired 12-h day/night length. The R script for this calibration method is provided in Zenodo (Adamík et al. 2023); a numerical sketch of the idea is given below. Due to technical differences in the sensitivity of the geolocators' light sensors between devices used in different study years, the estimated sun-elevation angles ranged widely, between −2.15 for the newer generation devices and +11.39 for the oldest generation device with a lower-sensitivity light sensor used in 2012. Raw geolocator files from the eight Czech birds are freely available in the Zenodo data repository (Adamík et al. 2023). We also determined the timing of Sahara crossings for all individuals by manually inspecting the daily light patterns recorded by the geolocators. In short, when crossing large ecological barriers like seas and deserts, typical nocturnal migrants, including the European Pied Flycatcher, regularly prolong their flights into the day or may fly non-stop (Jiguet et al. 2019). Such behaviour is reflected in the geolocator's light recordings as lengthy periods of uninterrupted maximal light intensities, when the light sensor is exposed to the sun as the bird flies (full light pattern, hereafter FLP, or Sahara crossing). Due to difficulties in obtaining reliable estimates of stopover locations close to equinox periods, data on stopovers are presented only as timings and median longitudinal estimates. We estimated migration speed as migration distance divided by duration (including stopovers).
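The equinox-calibration idea lends itself to a compact numerical sketch: around an equinox, the true day length is close to 12 h everywhere, so one can solve for the sun-elevation angle at which the thresholded twilights yield a 12-h day. The sketch below is a schematic Python translation of that idea; the authors' actual implementation is the R script archived on Zenodo, and the toy day-length model here (8 minutes of apparent day length per degree of sun elevation) is an assumption for illustration only.

from scipy.optimize import brentq

def calibrate_equinox(day_length_hours, lo=-6.0, hi=6.0):
    """Find the sun-elevation angle (degrees) at which the day length
    implied by the geolocator's twilights equals 12 h. `day_length_hours`
    maps a candidate angle to a mean day length and stands in for the
    real light-threshold computation on the recorded data."""
    return brentq(lambda angle: day_length_hours(angle) - 12.0, lo, hi)

# Toy stand-in: each degree of sun elevation shortens the apparent day
# by 8 minutes, with a 12-h day at -1.5 degrees (illustrative numbers).
toy_model = lambda angle: 12.0 - (8.0 / 60.0) * (angle + 1.5)
print(calibrate_equinox(toy_model))  # -> -1.5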
Distances between the breeding and African non-breeding sites were estimated as great circle distances. Migration duration is the time (in days) between departure from the breeding site and arrival at the African non-breeding site (duration.migration).

Multi-population assessment

We collated published data on individually tracked European Pied Flycatchers from European breeding sites. To date, data are available on geolocator-tracked birds from the UK, the Netherlands, Finland, and Norway (Ouwehand et al. 2016; Ouwehand and Both 2017; Bell et al. 2022). From these studies, we extracted data on departure from breeding sites (variable names in parentheses: autumn.departure), timing of Sahara crossing (inferred from light anomalies, FLP), arrival at African non-breeding grounds (winter.arrival), median non-breeding location estimates (winter.longitude, winter.latitude), and egg-laying dates (laying.date). We took the dates of the Sahara crossing for the four Finnish birds from Adamík et al. (2016). The full collated dataset for 76 individuals is available as Online Supplement 2, Table S1. All variables related to dates are expressed as days of the year. To assess whether the five European populations differ in African non-breeding site locations or in duration of migration, we ran three linear models (LM) with country as an explanatory variable (five countries) and non-breeding site longitudinal (winter.longitude) or latitudinal location (winter.latitude) and duration of migration (in days) as response variables. In three further LMs, which always had a single predictor, we explored whether non-breeding longitudes (response variable) can be explained by egg-laying dates, departures from breeding sites, and arrivals to Africa. In further analyses, we used linear mixed-effects models (LMM) to assess whether latitudinal or longitudinal location estimates in Africa (response variables winter.longitude or winter.latitude) are associated with breeding site longitudes or latitudes (fixed predictors: breeding.longitude, breeding.latitude), while accounting for the fact that multiple individuals originate from the same study site. For this reason, we entered the breeding population (country) as a random effect. For evaluating the strength of relationships between the four consecutive phases of the annual cycle (egg-laying date, departure from breeding site, Sahara crossing, arrival at African non-breeding sites), we fitted LMMs which always had a single fixed predictor and country as a random effect. For clarity, the model syntax is provided with the test statistics in the results. For model fitting we used the R package lme4 (Bates et al. 2015); for model diagnostics, we used the R package performance (Lüdecke et al. 2021). The models were run for the full dataset of 76 individuals; however, the sample size was 66 for Sahara crossing, 74 for arrival at non-breeding grounds, 47 for location estimates of non-breeding grounds, and 41 for egg-laying dates.

Migration of birds from Czech breeding grounds

On average, Czech flycatchers departed from their breeding grounds on 8 August (range 24 July to 22 August, Table 1). All flycatchers headed SW towards the Iberian Peninsula (Fig. 1 and Online Supplement 1, Fig. S1). We detected between one and three stopover sites per bird. Stopovers before the Sahara crossing lasted on average 9.7 days (n = 13; range 3-24.5 days) and were located around 4.7°W (range 2.3°E to 9.5°W).
Stopovers after the Sahara crossing were slightly shorter, on average 6.8 days (n = 6; range 3.5-11.5 days; Online Supplement 1, Table S4), and were located further west, at around 12.2°W (range 9.6-14.1°W). On average, birds crossed the Sahara on its western edge on 13 September (range 30 August to 30 September). Mean arrival at the African non-breeding grounds was on 2 October (range 10 September to 10 October). One bird showed intra-tropical movement: it arrived at its first African non-breeding site on 11 September, stayed there for 60 days, and then moved about 3° westwards to its final residency site. Autumn migration lasted on average 47.5 days (range 34-62 days), including stopovers. The African non-breeding residency sites overlapped for the two tracked Czech populations and for both sexes (t-test on longitudes: t = −0.09, P = 0.928, df = 6; t-test on latitudes: t = 0.53, P = 0.612, df = 6; Fig. 1). Most birds were clustered around the south-western edge of Mali (four individuals), two were in Burkina Faso, one in Guinea, and the easternmost one in the Ivory Coast. As birds from both Czech breeding sites showed considerable overlap in both non-breeding locations and migration phenology, we pooled their data for the pan-European comparison of populations. For the one bird with data available up until spring, the departure from the non-breeding site was on 20 April, after 204 days of residency. The bird initiated the crossing of the Sahara on 23 April and made a 12-day stopover around 6.7°E after the desert crossing.

Migration timing of European breeding populations

Most birds (54 out of 76) left the breeding sites by the end of the first week of August (range 15 July to 28 August, Fig. 3). Birds from the Dutch and UK breeding populations were similar in departure timing, with mean departure dates of 1 August and 4 August, respectively. The Czech birds left around 8 August, and the Finnish birds departed on average 16 days later. The single Norwegian bird left the breeding site on 16 August. A similar order was found for the timing of the Sahara crossing, but here the populations differed in the interval between departure from the breeding sites and the Sahara crossing: the shortest interval was in the Dutch birds (18 days) and the longest in the Finnish and Czech birds (36-37 days). Population-specific arrivals at the African non-breeding grounds were again in the same order as the breeding site departures. Interestingly, the Finnish birds had a very short interval (8 days) between the Sahara crossing and arrival at non-breeding locations (mean intervals in other populations were in the range of 17-21 days). The Norwegian bird arrived late at the African non-breeding grounds (14 October vs. a mean of 16 September for all birds). Autumn migration ranged from 17 to 85 days, and on average it took 41.3 days to reach the African non-breeding sites. There was a significant effect of breeding population on the duration of migration (LM: duration.migration ~ country, F(4, 69) = 4.71, P = 0.002, R² = 0.21), but this was likely due to the unusually short migration time of the Dutch birds (mean 35.3 days), while birds from other populations had similar durations (except for the one Norwegian bird with a migration of 59 days). Birds that left their breeding sites late tended to have shorter migration durations (LMM: duration.migration ~ autumn.departure + (1 | country), b = −3.31 ± 0.15, t = −2.2, P = 0.033, marginal R² = 0.05, conditional R² = 0.36, n = 74).
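Models of this kind (a single fixed predictor with breeding population as a random intercept) were fitted with lme4 in R; an equivalent sketch in Python's statsmodels is shown below. The data frame is a hypothetical stand-in with illustrative values only, not rows from the collated dataset in Online Supplement 2.

import pandas as pd
import statsmodels.formula.api as smf

# Illustrative stand-in for the 76-bird dataset; dates as days of the year.
df = pd.DataFrame({
    "duration_migration": [35, 38, 42, 40, 47, 45, 59, 44, 39, 41],
    "autumn_departure":   [213, 214, 216, 217, 220, 221, 228, 224, 218, 219],
    "country":            ["NL", "NL", "UK", "UK", "CZ", "CZ", "NO", "FI", "FI", "CZ"],
})

# lme4 notation: duration.migration ~ autumn.departure + (1 | country)
model = smf.mixedlm("duration_migration ~ autumn_departure", df,
                    groups=df["country"])
result = model.fit()
print(result.summary())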
Discussion

In this study, we present new European Pied Flycatcher autumn migration data from two subpopulations in a Central European region. We did not find any substantial differences in migration schedules and locations at the African non-breeding grounds for these birds, but we should be cautious, as the sample size was small and we did not have access to full-year tracking data. Interestingly, from atlas mapping we see that the two subpopulations show regionally contrasting population trajectories, the one in Bohemia being stable and the one in Moravia declining (Šťastný et al. 2021). It would be valuable to know where, and at which time of year, the main drivers of population dynamics act in these two regional populations. For Dutch birds, there is evidence for mechanisms at the breeding sites (Both et al. 2006), while across the UK the trends in populations are driven by changes in survival and immigration, which probably act outside the breeding season (Nater et al. 2023). The fact that we found overlaps in the non-breeding locations of our sample of birds does not necessarily mean that they cannot differ in habitat use at a finer scale, below the resolution of geolocation by light. We were able to detect between one and three stopovers during the autumn migration, usually one or two stops before the Sahara crossing and one after it. Stops before the Sahara crossing were slightly longer, nearly 10 days, while those after it lasted on average almost 7 days. Pied Flycatchers tracked from the southwest UK usually have two stops during the autumn migration, one before and one after the Sahara crossing (Bell et al. 2022). Interestingly, for the UK-tracked birds, stopover durations were slightly longer after the barrier crossing. Longitudinal estimates of stopovers after the Sahara crossing were in a similar range for the two populations (CZ: 9.6-14.1°W vs. UK: 9.8-16.2°W), albeit birds from the Czech breeding population stopped on average 1.1° further east of the UK birds. The increased number of stopovers detected in our sample of birds is likely a consequence of the longer migration distances faced by the Central European birds. This fits with the findings of Fourcade et al. (2022), who estimated stopover durations at a fuelling site in south-western France near the Atlantic coast. At their site, the body masses were lower than those from study sites located further south on the Iberian Peninsula (Bibby and Green 1980; Goffin et al. 2020), indicating that one additional stopover was needed for fuelling before the barrier crossing. Only a few individuals at the French stopover site had sufficient fuel loads to be able to cross the Sahara without additional refuelling (Fourcade et al. 2022).

(Fig. 4: Relationships between annual cycle events in the European Pied Flycatcher. The timing of events is expressed as days of the year.)

Multi-population assessment

By collating available geolocator tracks from 76 individuals from five populations in Europe, we provide a comprehensive overview of the locations of African non-breeding residency sites and the timing of autumn migration for individual European Pied Flycatchers. Except for the single Norwegian individual, birds from the other four European breeding populations showed considerable overlap in location estimates at the African non-breeding grounds.
Statistically, we could detect longitudinal segregation in two clusters: birds from the UK and Finnish populations overlapped and were west of the second cluster of birds from the Czech and Dutch populations. There was no evidence of any latitudinal segregation of the populations. However, one has to be careful with the interpretation of latitudinal estimates inferred from light-level geolocators. By default, they have considerable uncertainty, while there are also issues with different calibration approaches and with whether birds from different populations use similar habitats (Lisovski et al. 2018). We failed to find clear support for the role of European breeding locations in the clustering of subsequent African residency areas. Only breeding latitude was very weakly associated with non-breeding site longitude estimates. However, the effect was weaker than in the first study on European Pied Flycatcher tracking by Ouwehand et al. (2016). This could be purely a consequence of the sampling effect. Finch et al. (2017) found strong support for a positive link between breeding and non-breeding longitudes, but no link between latitudes, in several populations of European Rollers (Coracias garrulus). In Common Swifts (Apus apus) tracked across several European populations, breeding latitudes were positively correlated with non-breeding latitudes, clear evidence for a chain migration pattern (Åkesson et al. 2020). The fact that we failed to find strong support for such links in the European Pied Flycatcher might simply reflect the scale of the contemporary study. This is a critical issue in any study on migratory connectivity. In an ideal situation, birds would have to be sampled across the entire species' breeding range.
Fig. 5 Relationships between non-breeding site longitudinal estimates and laying dates, arrivals to non-breeding grounds and departures from breeding sites
Migration timing was considerably different among populations. Dutch birds were first in all three phases (departure from breeding sites, Sahara crossing, and arrival to African non-breeding grounds), followed by the UK, Czech and Finnish birds, respectively. Interestingly, of all populations, the Finnish birds had the shortest interval of only eight days between the Sahara crossing and arrival to the non-breeding grounds. This could indicate that they undertook considerable refuelling prior to the barrier crossing, performing a long endurance flight that brought them close to the African residency sites. The birds could have skipped refuelling after the desert crossing, or their stops were very short, below the resolution set for stationary periods in the GeoLight package (given the data quality for detecting short stopovers). Autumn migration ranged from 17 to 85 days and was similar for Czech, UK and Finnish birds. In contrast, the Dutch birds had the shortest migration of only about 35 days. We also found a negative relationship between breeding-site departures and the duration of migration, i.e. the later a bird departed, the shorter the time it was en route. This is a pattern similar to the Collared Flycatcher (Ficedula albicollis), in which later-departing individuals migrated at faster speeds towards the African residency sites (Briedis et al. 2018a). Very likely, late individuals are trying to catch up with the early ones. Whether such behaviour is innate, or whether the birds adjust it according to seasonal changes in available food resources, is unknown. Another interesting finding was that birds residing further west in Africa arrived there later.
The effect was much stronger than in the previous study by Ouwehand et al. (2016). In contrast to Ouwehand et al. (2016), we did not confirm the relationship between breeding-site departures and non-breeding longitudes. The difference between these two studies is likely attributable to the sampling effect. Interestingly, we did not find a significant effect of the timing of breeding on the subsequent phases of the annual cycle, even though the phases were positively correlated with each other, a pattern regularly found in other songbirds (e.g. Mitchell et al. 2012; van Wijk et al. 2017; Gow et al. 2019). As we clearly see large differences in the timing of events in the studied populations, we would expect a strong effect of seasonality. However, egg-laying dates were available for only 41 of the 76 tracked individuals. A similar lack of effect of the timing of breeding on subsequent annual cycle events was found in the Collared Flycatcher (Briedis et al. 2018b). Thus, there must be factors other than egg-laying that explain the timing of subsequent events. No doubt, the photoperiod at the breeding sites plays a significant role in setting the pace of the circannual rhythms (Gwinner 1996). Briedis et al. (2020) found a strong effect of seasonality shaping the timing of avian annual cycles. Thus, further exploration with a larger number of study sites, or experimental translocations, would be desirable to explore the role of, for example, site-specific phenology and photoperiod in explaining the variability in the timing of annual cycle events among populations. In our study, the overall return rate of tagged birds was 9.6%, with males having slightly lower return rates. Unfortunately, for various reasons, we did not have control groups of ringed-only birds at either site. Our only available data are from a nearby study site with an overall return rate of 12.8%. We know that the true return rate of logger-tagged birds must have been higher, but our study coincided with two seasons of cold and rainy weather during the nestling period. In addition, we experienced very high nest mortality due to dormouse and marten predation (Adamík and Král 2008). As a result, we often could not catch the adult breeders and check them for geolocators. Nevertheless, we have to admit that for these two particular sites we cannot be sure about the tagging effect on return rates. The available published studies report no general tagging effect on return rates (Brlík et al. 2020), and Bell et al. (2017) report no negative effects in British flycatchers. In the Dutch flycatcher population, there was no overall tagging effect, but the type of harness did affect return rates (Ouwehand and Both 2017). Return rates of logger-tracked birds in the other populations used in our comparative study were in the range of 4-42%, and those of the control groups in the range of 10-56% (Ouwehand et al. 2016). We think that in our case the loggers did not affect between-population differences in migratory behaviour, and that the differences found across populations are not related to the tags per se. However, this is beyond the scope of our study, and we still know very little about how tagging affects the behaviour of migratory birds across the years. With our multi-population assessment, we try to fill a gap by comparing the timing of migration among western European populations of the European Pied Flycatcher.
In addition, we show that there is considerable mixing at the African non-breeding grounds of birds from various breeding origins in the western Palaearctic. Across species, various breeding populations seem to mix frequently at African non-breeding sites (Finch et al. 2017). This might have interesting consequences for population dynamics at the breeding grounds since, depending on the spatial scale of the factors operating at the non-breeding sites, these factors might have synchronising or desynchronising effects. However, with the available tracking studies we could cover only a small part of the populations from the extensive breeding range of the species. This comparative study covered only western and central Europe and part of Fennoscandia. The species' breeding range extends far to the east (up to 90° E), and there is no tracking study from the European part of Russia or further east towards western Siberia. The few ringing recoveries available from the eastern part of the range suggest that the general heading during migration is towards the Iberian Peninsula (Spina et al. 2022). Thus, in autumn even birds from the Asian breeding populations pass via Iberia and the western fringe of the Sahara (Chernetsov et al. 2008). From a conservation perspective, this means that the entire population passes through a particular region. Any change in such a bottleneck that could affect stopover behaviour, for example via fuelling rates, might represent a critical point for different populations (Runge et al. 2014). While the African non-breeding range stretches all the way east to the Central African Republic and the northeastern part of DR Congo, there is not a single recovery connecting these areas to the breeding grounds (Spina et al. 2022). Similarly, all flycatchers tracked so far occupy only the western part of the African non-breeding range. We suggest that non-breeding populations further east in central Africa originate from breeding sites in the eastern part of the breeding range. As such, further studies from the eastern part of the breeding range are needed, not only for European Pied Flycatchers but also for a wide range of other species. Furthermore, to get a thorough picture of the patterns of migratory connectivity, we need tracking studies from the African non-breeding sites to find the breeding origins of wintering birds (cf. Blackburn et al. 2017). This may be particularly valuable for the European Pied Flycatcher, a model species for climate change research.
Reconnection electric field estimates and dynamics of high-latitude boundaries during a substorm
The dynamics of the polar cap and the auroral oval are examined in the evening sector during a substorm period on 25 November 2000 by using measurements of the EISCAT incoherent scatter radars, the north-south chain of the MIRACLE magnetometer network, and the Polar UV Imager. The location of the polar cap boundary (PCB) is estimated from electron temperature measurements by the mainland low-elevation EISCAT VHF radar and the 42 m antenna of the EISCAT Svalbard radar. A comparison to the poleward auroral emission (PAE) boundary by the Polar UV Imager shows that in this event the PAE boundary is typically located 0.7° of magnetic latitude poleward of the PCB derived from EISCAT data. The convection reversal boundary (CRB) is determined from the 2-D plasma drift velocity extracted from the dual-beam VHF data. The CRB is located 0.5-1° equatorward of the PCB, indicating the existence of viscous-driven antisunward convection on closed field lines. East-west equivalent electrojets are calculated from the MIRACLE magnetometer data by the 1-D upward continuation method. In the substorm growth phase, the electrojets together with the polar cap boundary move gradually equatorwards. During the substorm expansion phase, the Harang discontinuity (HD) region expands to the MLT sector of EISCAT. In the recovery phase, the PCB follows the poleward edge of the westward electrojet. The local ionospheric reconnection electric field is calculated by using the measured plasma velocities in the vicinity of the polar cap boundary. During the substorm growth phase, values between 0 and 10 mV/m are found. During the late expansion and recovery phase, the reconnection electric field has temporal variations with periods of 7-27 min and values from 0 to 40 mV/m. It is shown quantitatively, for the first time to our knowledge, that intensifications in the local reconnection electric field correlate with the appearance of auroral poleward boundary intensifications (PBIs) in the same MLT sector. The results suggest that PBIs (typically 1.5 h MLT wide) are a consequence of temporarily enhanced, longitudinally localized magnetic flux closure in the magnetotail.
Introduction
Magnetic reconnection on the dayside magnetopause and in the nightside magnetotail are the main factors controlling the solar wind energy transfer into the magnetosphere and ionosphere. As first suggested by Dungey (1961), during southward interplanetary magnetic field (IMF) conditions, low-latitude reconnection on the dayside magnetopause creates open field lines that are transported to the magnetotail by the solar wind flow. Subsequent closing of the field lines by magnetotail reconnection, which can occur either at the distant neutral line (DNL) or, during substorm conditions, at the near-Earth neutral line (NENL), and the following sunward motion of these closed field lines complete the magnetospheric convection cycle. The magnetospheric convection maps into the ionosphere, where plasma flows antisunward across the polar cap and returns to the dayside at lower latitudes, roughly within the dawn and dusk auroral oval. Depending on the balance between the dayside and nightside reconnection rates, the amount of open magnetic flux changes, affecting the polar cap size.
When the dayside reconnection rate exceeds the nightside rate, the polar cap expands as the open flux increases. In the opposite situation, the polar cap contracts as the open flux decreases (Siscoe and Huang, 1985; Cowley and Lockwood, 1992). During southward IMF conditions, the two-cell convection pattern is established, and a non-zero By component leads to an asymmetry in the convection cells (e.g. Cowley and Lockwood, 1992). For purely northward IMF, dayside magnetic reconnection occurs poleward of the cusp, creating reverse convection cells as the pre-existing open flux circulates from one side of the tail lobe to the other. However, with non-zero By, closed field lines on the equatorward side of the cusps may also reconnect (Tanaka, 1999, and references therein). According to Faraday's law, changes in the amount of open magnetic flux F_PC threading the polar cap are related to the electromotive force, which can be separated into the dayside and nightside reconnection voltages Φ_D and Φ_N (e.g. Siscoe and Huang, 1985; Milan et al., 2003; Milan, 2004):

dF_PC/dt = Φ_D − Φ_N. (1)

The reconnection voltages are the integrals of the day- and nightside reconnection electric fields along those portions of the polar cap boundary which map to the reconnection sites. In a stationary situation, where the same amount of magnetic flux is opened on the dayside as is closed on the nightside, the sum of these voltages is zero. In this paper, we study only the nightside reconnection electric field E_r, which, if known, can be integrated along the nightside polar cap boundary (PCB) to get the corresponding reconnection voltage:

Φ_N = ∫_PCB E_r dl. (2)

Theoretical work by Vasyliunas (1984) showed that the plasma flow across the polar cap boundary can be utilized to determine the reconnection electric field in the ionosphere and further in the magnetotail. The reconnection electric field along the polar cap boundary can be written as

E_r = (v_p − v_b) B, (3)

where v_b is the polar cap boundary velocity (normal to the boundary), v_p is the plasma flow velocity normal to the PCB, and B is the magnetic field. Hence a duskward electric field in the magnetotail corresponds to magnetic flux closure (see e.g. Østgaard et al., 2005; Hubert et al., 2006). In the following, we use a sign convention where a positive reconnection electric field means flux closure and velocities in the equatorward direction are positive. The first attempt to estimate the ionospheric reconnection electric field by using Eq. (3) was made by de la Beaujardiere et al. (1991). The Sondrestrom incoherent scatter radar (ISR) in the midnight sector was used in a meridional scanning mode, and the PCB was identified by using electron density contour levels caused by auroral precipitation. The orientation of the PCB was inferred from all-sky images. Blanchard et al. (1996, 1997) utilized the same method, but additionally Blanchard et al. (1996) used a technique based on 630.0 nm auroral emissions. Global optical imaging and ground-based radar measurements were combined to calculate the reconnection electric field in the nightside by Østgaard et al. (2005). They used the poleward edge of the auroral oval extracted from the IMAGE FUV wide band imaging (WIC) camera images together with the EISCAT ISR electron temperature measurements to identify the PCB and its orientation and velocity. Plasma velocity was obtained from EISCAT measurements. The ISR-based methods described above are restricted in local time (MLT) coverage. However, their spatial and temporal resolution in locating the PCB is typically better than in other methods.
Another approach to estimate the reconnection rate is to apply Eq. (1) to get the reconnection voltages. Milan et al. (2003) inferred the polar cap boundary at all local times from global optical images by the Polar UV Imager and from the spectral widths of the SuperDARN HF radars, using low-altitude orbiting satellite particle data as guidance. The reconnection voltages were then calculated by using the change in the amount of open flux. Milan et al. checked the validity of the method by comparing the dayside reconnection voltage values to estimates calculated by using Eqs. (2) and (3), and found a good agreement between the results of the two methods. Recently, Hubert et al. (2006) used global optical images by the IMAGE FUV SI12 instrument and plasma convection measurements of the SuperDARN radars to get the reconnection voltages by using Eqs. (2) and (3). When the two-cell convection pattern prevails, the antisunward flow in the polar cap and the sunward return flow are separated by a velocity shear called the convection reversal boundary (CRB). Intuitively, the CRB represents the polar cap boundary, since the open polar cap field lines convect with the plasma across the polar cap from the dayside to the nightside, and the flow with the reconnected closed field lines returns back to the dayside. However, there is evidence of a small amount of antisunward convection on closed field lines, indicating a PCB location poleward of the CRB (Senior et al., 1994; Sotirelis et al., 2005, and references therein). As a consequence of magnetosphere-ionosphere coupling, the current systems which are embedded in the convection pattern and auroral zone involve both horizontal and field-aligned currents (FACs). At the Harang discontinuity (HD) in the premidnight sector, the dominating eastward electrojet changes to the westward electrojet (e.g. Koskinen and Pulkkinen, 1995). The electrojets (and associated FACs) produce the magnetic convection reversal boundary (MCRB). In this paper, we study some specific aspects of the reconnection process that can be related to high-latitude boundaries, by using EISCAT incoherent scatter radar measurements. We estimate the ionospheric reconnection electric field in the evening sector during a substorm on 25 November 2000, by applying the method introduced by Vasyliunas (1984). The plasma velocity and the location, orientation, and velocity of the polar cap boundary are determined from EISCAT measurements. The EISCAT PCB is compared with Polar UVI images. In addition, the convection reversal boundary is extracted from the EISCAT data, and these boundaries are then studied in the framework of the electrojets, including the Harang discontinuity region. Finally, the reconnection electric field estimates are calculated.
Ground-based measurements
Throughout this study, we use the AACGM (altitude-adjusted corrected geomagnetic) coordinate system, in which any two points connected by a magnetic field line have the same coordinates (Baker and Wing, 1989). The VHF data were obtained with a latitudinal coverage of 70.3-78.2° for the VHFa in 15 range gates, and 70.1-78.5° for the VHFb in 16 range gates. The latitudinal resolution decreased polewards from 0.4 to 0.8° and from 0.3 to 0.6° for the VHFa and VHFb, respectively. Both radar beams covered the altitude range of 233-1032 km, with the gate height separation increasing from 23 to 83 km in the poleward direction.
The ESR 42 m field-aligned data were measured from the altitude range of about 90-880 km, with the height resolution coarsening from 3 km in the lowest E region to 37 km high in the F region. The EISCAT measurements covered the time interval 17:00-22:00 UT on 25 November 2000 and yield the four basic ionospheric parameters: electron density Ne, electron and ion temperature Te and Ti, and the line-of-sight (l-o-s) ion velocity Vi. The magnetic midnight at Tromsø is at about 21:30 UT. The north-south chain of the MIRACLE magnetometer network (Fig. 1) was used to estimate the ionospheric currents. East-west equivalent electrojets were calculated from magnetic X component data by using the 1-D upward continuation method by Vanhamäki et al. (2003).
Polar cap boundary, plasma velocity and reconnection electric field
The polar cap boundary location was estimated by using the method introduced by Aikio et al. (2006) and used also in Aikio et al. (2008). The method uses electron temperature Te measurements from a low-elevation EISCAT VHF radar and the field-aligned ESR 42 m antenna. The Te in the nightside F region can be enhanced within the auroral oval due to collisional heating by particle precipitation. When the PCB is situated between the mainland and Svalbard, the field-aligned ESR 42 m provides a Te height profile in the polar cap. The low-elevation radar, in this case the dual-beam VHF, measures a Te profile which is affected by both latitudinal and altitudinal variations in the temperature. By subtracting the polar cap Te height profile from the low-elevation Te profile, a ΔTe latitude profile is obtained, in which the polewardmost latitude where ΔTe is positive is taken as the PCB (see Aikio et al., 2006, for details). The VHF data were first integrated to 60 s and the ESR 42 m measurements to 128 s. During the studied time interval, the electron temperature at Svalbard did not show any significant variations, which made a longer integration period possible. Consequently, to reduce the variance in the reference polar cap height profile, the ESR 42 m data were averaged to 1 h time resolution. The PCB was determined separately for the two beams of the VHF, which allowed estimation of the PCB orientation. The 2-dimensional E×B plasma drift velocity was calculated from the l-o-s ion velocities of the two VHF radar beams. When doing this, it must be assumed that there are no longitudinal variations in velocity between the beams and that the field-aligned ion velocity is zero. The longitudinal separation of the two radar beams increased from 97 to 195 km with latitude. The 2-D velocity vectors were placed in the middle of the gate pairs. For calculating the magnetic field at different latitudes, the IGRF/DGRF model was used (http://modelweb.gsfc.nasa.gov/models/cgm/cgm.html). The reconnection electric field was calculated by Eq. (3). A schematic presentation of the situation is shown in Fig. 2. The 2-D plasma velocity was interpolated at the PCB latitude, and the components of the plasma velocity (v_p) and the PCB velocity (v_b) along the PCB normal were calculated. For the estimate of the polar cap boundary velocity, 5-min running means of the PCB location at 1-min resolution were first calculated. These data were further interpolated to 30-s resolution to obtain the PCB velocity estimates at the same time instants as the plasma velocity measurements.
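For illustration, the sketch below shows the two computational steps just described, under simplified assumptions: solving the horizontal drift from the two l-o-s beam velocities, and evaluating Eq. (3) from the velocity components normal to the PCB. The beam azimuths and all numerical values are illustrative, not values from this event.

```python
# Sketch of the drift solution and of Eq. (3); all numbers illustrative.
import numpy as np

def drift_2d(v_los_a, v_los_b, az_a_deg, az_b_deg):
    """Horizontal drift (v_east, v_north) from two line-of-sight velocities
    along azimuths az_a, az_b (degrees east of north), assuming zero
    field-aligned flow and no longitudinal variation between the beams."""
    aa, ab = np.radians(az_a_deg), np.radians(az_b_deg)
    # l-o-s velocity = v_east * sin(az) + v_north * cos(az)
    A = np.array([[np.sin(aa), np.cos(aa)],
                  [np.sin(ab), np.cos(ab)]])
    return np.linalg.solve(A, np.array([v_los_a, v_los_b]))

def reconnection_e_field(v_p, v_b, B):
    """Eq. (3): E_r = (v_p - v_b) B, with v_p and v_b the plasma and PCB
    velocity components along the boundary normal (equatorward positive);
    positive E_r corresponds to flux closure."""
    return (v_p - v_b) * B

v_e, v_n = drift_2d(-350.0, -420.0, az_a_deg=345.0, az_b_deg=15.0)
print(f"v_east = {v_e:.0f} m/s, v_north = {v_n:.0f} m/s")
# Plasma 400 m/s equatorward across the PCB, boundary 100 m/s equatorward,
# |B| ~ 5e-5 T in the F region:
print(f"E_r = {reconnection_e_field(400.0, 100.0, 5.0e-5) * 1e3:.0f} mV/m")  # 15
```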
It was assumed that the PCB maintains its orientation as it moves in latitude along the magnetic meridian during the one-minute interval, though this condition may not always be true, as pointed out by Østgaard et al. (2005).
Comparison of EISCAT and Polar UVI data
For the studied interval 17:00-22:00 UT on 25 November, global images of the northern auroral oval were provided by the UV Imager of the Polar satellite (Torr et al., 1995). The UV Imager was taking images with an integration time of 37 s using the LBHl filter (Lyman-Birge-Hopfield long, 160-180 nm). The emission luminosity in the LBHl wavelength band is practically directly proportional to the energy flux of the precipitating electrons. An emission altitude of 120 km is assumed for the LBHl emissions. For the images, only the line-of-sight correction has been done, which takes into account the increased emission when looking through a longer path length of the atmosphere at large angles away from nadir. The dayglow has not been removed, since it is northern winter. The images were collected continuously, excluding short periods at 18:18-18:22 UT, 19:04-19:09 UT and 20:21-20:25 UT, when the instrument was taking background images. Since the data had a time resolution of 37 s, one complete one-minute time interval could include two or three successive images. Therefore, to compare the UVI data with the EISCAT data, an image sequence of one frame per minute was generated by selecting the image overlapping most with the corresponding one-minute period to represent that time interval. To estimate the poleward auroral emission boundary (PAE), a 40-min wide MLT sector containing both of the VHF radar beams was selected from each image. Then, for each sector, a latitude profile of longitudinally averaged emission intensities was calculated with a resolution of half a degree in latitude. The latitude profile was further interpolated to a resolution of 0.1°, and the location of the PAE was determined by using a ratio value of 0.3 UVmax, where UVmax is the intensity maximum in the corresponding latitude profile. In cases of low maximum intensity, a threshold value of 4.3 photons cm−2 s−1 was used instead. Both methods for locating the PAE boundary have been used earlier, in Baker et al. (2000) and Aikio et al. (2006). Baker et al. (2000) found for the Polar UVI images that the optimal ratio and threshold values were 0.3 UVmax and 4.3 photons cm−2 s−1, respectively, when comparing the PAE and the DMSP b5e boundaries. In addition, when comparing Viking UV with DMSP data, Kauristie et al. (1999) used the ratio value of 0.5 UVmax (full width at half maximum). However, they stated that the value 1/e ≈ 0.37 could be even better in cases not hampered by background scatter, which is close to the value 0.3 obtained by Baker et al. (2000).
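A minimal sketch of the PAE extraction just described is given below, assuming a latitude profile of longitudinally averaged intensities as input. The handling of the fixed threshold (falling back to 4.3 photons cm−2 s−1 when the ratio level would be lower) is our reading of the procedure, and all profile values are synthetic.

```python
# Sketch of the PAE extraction: interpolate the intensity profile to
# 0.1 deg resolution and take the polewardmost latitude where the
# intensity stays at or above 0.3*UV_max (falling back to the fixed
# threshold of 4.3 photons cm^-2 s^-1; our reading of the procedure).
import numpy as np

def pae_boundary(lat, intensity, ratio=0.3, threshold=4.3):
    # Assumes lat increases poleward (northern hemisphere).
    lat_fine = np.arange(lat.min(), lat.max() + 1e-9, 0.1)
    profile = np.interp(lat_fine, lat, intensity)
    level = max(ratio * profile.max(), threshold)
    above = np.where(profile >= level)[0]
    return lat_fine[above.max()] if above.size else np.nan

# Synthetic profile: a Gaussian auroral oval peaking at 70 deg latitude.
lats = np.arange(60.0, 80.5, 0.5)
profile = 20.0 * np.exp(-0.5 * ((lats - 70.0) / 2.0) ** 2)
print(pae_boundary(lats, profile))   # ~73.1 deg
```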
An example of a UVI image taken at 20:06:18 UT is presented in Fig. 3. The calculated PAE latitude (74.7°) is marked by a white arc connecting the MLT sector boundaries. The PCBs from the VHFa (73.7°), VHFb (73.6°) and their average (73.65°) are marked by red crosses. In this case the latitudinal difference between the average VHF PCB and the PAE was 1.1°. In total, 138 VHF PCB and UVI PAE pairs could be extracted from the studied substorm time interval 17:00-22:00 UT. Only those point pairs were included for which both estimates were below 75° cgmlat, which is the upper limit for the EISCAT Te method. In addition, the number of point pairs available was reduced by an approximately one-hour data gap in the VHF PCB data between 17:52-18:57 UT (5-min running means of the two data sets are visible in Fig. 8). The result is shown as a scatter plot in Fig. 4. Ideally, the points would lie on the y = x curve, but in this event, after making a fit of the form y = x + a, where a is a constant (for a fit of this form, the least-squares estimate of the offset is simply the mean of y − x), the UVI PAE appeared to be located typically 0.69° cgmlat poleward of the VHF PCB. The Pearson correlation coefficient calculated for the scatter data set is r = 0.42. A few tenths of a degree of this discrepancy can be attributed to the method used in the VHF PCB determination. By default, the PCB is determined as the polewardmost latitude where the Te curve with error bars stays above zero, giving somewhat lower latitude values than the curve without error bars would give (see Aikio et al., 2006). The original latitudinal resolution of the VHF measurements causes at maximum a 0.6-0.7° uncertainty in the PCB location. Nominally, the field-of-view (f-o-v) resolution of the Polar UV Imager is approximately 0.04°×0.04° (Torr et al., 1995), corresponding typically to an ionospheric resolution of 0.3° in latitude. However, the despun platform has wobble in one direction, which decreases the UVI resolution in that direction by a factor of 10 in the worst case (Parks et al., 1997). This is probably the most important factor contributing to the scatter, since the wobble was along the 10:00-22:00 MLT line, which is approximately perpendicular to the PAE boundary. In addition, some contribution may arise from the spatial averaging of the UVI PAE intensities. In the case that the UVI PAE boundary is tilted away from the latitudinal direction in the selected MLT sector, the calculated latitude profile of longitudinally averaged emission intensities is broadened polewards. As a result, the calculated UVI PAE boundary would be located somewhat poleward of the true PAE boundary at the EISCAT longitude. The plasma velocity vectors calculated from the EISCAT VHF data are presented in Fig. 6. The solid black curve is the 5-min running mean of the VHF polar cap boundary at 1-min resolution, and the dashed curve is the plasma convection reversal boundary (CRB) determined from the plasma velocity vectors. Note that only every third velocity measurement is plotted for the sake of clarity. The EISCAT measurements started at 17:00 UT and showed a typical evening-cell convection pattern with the convection reversal boundary at 72° cgmlat. The polar cap boundary was located about 0.5° cgmlat poleward of the CRB. The polar cap boundary moved equatorwards, which is a signature of a substorm growth phase. The CRB followed closely the motion of the PCB. At 17:52 UT the PCB was abruptly displaced to the south and our method could not see it anymore, and slightly later also the CRB disappeared from the field of view of the VHF radar. In Fig. 7 a selected set of UVI frames is presented to show the general evolution of the substorm. Most of the frames were selected for reasons that become clear in Sect. 3.5 and may not coincide with the time instants discussed here. Figure 7a was taken during the growth phase of the substorm. The time and location of the substorm onset were determined from the Polar UVI images. At about 18:07 UT the substorm expansion started around 22:40 MLT with a sudden brightening of aurora in a 40-min wide MLT sector in the equatorward oval (not shown).
This was followed by a magnetic Pi2 pulsation burst at 18:08 UT, detected by the SAMNET Borok (Russia) mid-latitude magnetometer station (54.1° cgmlat, 113.3° cgmlon; data not shown). The delay of the order of 1 min between the auroral signature and the Pi2 burst is consistent with earlier observations of propagation-related delays (Liou et al., 2000). Here the Borok station was located about 1 h MLT west of the substorm onset region. Figure 7b shows the situation two minutes after the onset, when the substorm had expanded poleward and toward the west and east. In about 30 min the substorm expansion had progressed so that intense auroral precipitation had intruded into the EISCAT local time from the east (Fig. 7c). At about 18:53 UT the convection reversal boundary reappeared from the south into the EISCAT f-o-v together with the PCB, and they both moved polewards (Fig. 6a). During 18:57-19:06 and 19:22-19:31 UT the convection pattern was very dynamic, probably violating the assumption of a uniform ion flow between the VHF radar beams. These periods are visible as gaps in the CRB. The beginning of the recovery phase at about 19:08 UT was determined from the start of the decrease in the total integrated westward equivalent current of the MIRACLE stations (data not shown). The PCB continued proceeding polewards until 19:23 UT (Figs. 6a and 7f). This is consistent with the observations by Milan et al. (2003), who found that the polar cap area was decreasing due to the poleward contracting PCB in the recovery phase, even after the substorm-associated auroral activity had faded away. During the slightly overlapping gaps in the CRB between 19:22-19:34 UT and in the PCB data between 19:31-19:35 UT, the boundaries moved equatorwards by about half a degree and 0.7°, respectively. The equatorward motion of the polar cap boundary turned poleward at 19:54 UT and lasted until 20:10 UT before turning equatorward again (Fig. 6b). During the studied time interval, the CRB followed the motion of the PCB and was located 0.5-1° equatorward of the PCB.
Boundaries and equivalent currents
The equivalent currents with the boundaries are shown in Fig. 8. The red curve indicates the 5-min running average of the UVI PAE boundary. The solid and dashed black curves are the VHF PCB and the CRB, respectively. For the sake of clarity, the convection reversal boundary after the onset of the substorm expansion has been left out (see Fig. 6). The white line marks the magnetic convection reversal boundary (MCRB) determined from the equivalent currents. In the premidnight sector until 20:25 UT (~23:00 MLT), a value of 0 mA/m was used for the MCRB. After 20:25 UT, when the westward electrojet was dominating, the −50 mA/m value (corresponding to about −30 nT in the X component) followed the poleward boundary better.
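As an illustration of the MCRB extraction just described (a contour of the east-west equivalent current at 0 mA/m, or −50 mA/m once the westward electrojet dominates), the sketch below interpolates the crossing latitude from a current profile. Taking the polewardmost crossing is our assumption, and the profile is synthetic.

```python
# Sketch of the MCRB extraction: interpolate the latitude where the
# east-west equivalent current profile crosses a chosen contour level.
import numpy as np

def mcrb_latitude(lat, j_ew, level=0.0):
    # Assumes lat increases poleward; j_ew positive for eastward current.
    s = j_ew - level
    crossings = np.where(s[:-1] * s[1:] < 0)[0]   # sign changes
    if crossings.size == 0:
        return np.nan
    i = crossings.max()                           # polewardmost crossing
    frac = s[i] / (s[i] - s[i + 1])               # linear interpolation
    return lat[i] + frac * (lat[i + 1] - lat[i])

lat = np.arange(64.0, 76.0, 0.5)
j_ew = 90.0 - 25.0 * (lat - 64.0)   # eastward equatorward, westward poleward
print(mcrb_latitude(lat, j_ew, level=0.0))   # 67.6
```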
During the substorm growth phase before 18:07 UT, the VHF PCB, the CRB, and the UVI PAE boundary were clearly located poleward of the eastward electrojet region and the magnetic convection reversal boundary. The boundaries showed roughly similar equatorward motion in association with the gradually equatorward expanding evening-sector eastward electrojet (EEJ) pattern. The UVI PAE boundary moved polewards at about 17:21 UT; similar, though much weaker, behaviour can also be seen in the VHF PCB. At the substorm onset at 18:07 UT, the poleward edge of the EEJ region, followed by the UVI PAE, jumped 2° cgmlat equatorwards. The VHF could not see the PCB, which was located equatorward of the radar f-o-v, together with the CRB (see also Fig. 6a). After 18:25 UT the EEJ pattern started to move polewards, and a few minutes later the UVI PAE followed. The DMSP F15 satellite was crossing the EISCAT MLT from south to north at 18:28 UT, and the particle data of the spacecraft showed a clear poleward particle boundary (b5e) at a latitude of 69.6° (black triangle in Fig. 8), just equatorward of the UVI PAE boundary (DMSP data not shown). The b5e boundary represents the poleward edge of the auroral oval as determined by an abrupt drop in the electron energy flux (Newell et al., 1996a, b). At 18:36 UT the eastward electrojet started to intensify and the MCRB stopped moving polewards. Two minutes later also the westward electrojet on the poleward side started to intensify. From the Polar UVI images it can be seen that the intensification of the electrojets was associated with the intrusion of intense auroral activity from the east into the EISCAT/MIRACLE MLT sector (Fig. 7c). The UVI PAE boundary continued its poleward motion together with the reappearing VHF PCB. After 18:53 UT the WEJ expanded abruptly by several degrees to lower latitudes. This was accompanied by the fading away of the most intense precipitation at the EISCAT MLT at 19:01 UT (Fig. 7e). After the Polar UVI data gap at 19:04-19:09 UT, the UVI PAE boundary appeared at a very high latitude of about 76°. However, the emissions were weak and the intensity latitude profile was flat, making the determination of the PAE boundary very uncertain. In addition, the wobble of the Polar satellite may have stretched the auroral forms polewards. The VHF PCB remained below 74° cgmlat. After about 19:13 UT, the eastward electrojet recovered, first at latitudes below 67°, in association with the weakening of the WEJ. From ~19:27 UT onwards, the equivalent current pattern was again dominated by the eastward electrojet, although at a lower intensity level. The westward electrojet intensified for a short period at 20:05 UT, associated with a poleward excursion of the PCB. After 20:22 UT the WEJ appeared in the post-midnight sector, and the UVI PAE and the VHF PCB together followed closely the poleward edge of the westward electrojet.
Harang discontinuity
After 18:38 UT an intense westward electrojet formed on the poleward side of the intensifying eastward electrojet. The WEJ expanded from the east, probably rotating the whole current pattern to an earlier MLT. By 19:00 UT the WEJ region had expanded equatorward down to latitudes of 65° cgmlat. During this time interval, between 18:38 and 19:00 UT, the MIRACLE magnetometer stations showed a change from positive to negative X, which is the original definition of the Harang discontinuity near magnetic midnight (Heppner, 1972). This MCRB is shown by a white dashed line in Fig. 8. Within the HD region, both of the electrojets had periodic, about 3-min fluctuations, whose effect can be seen in the oscillatory motion of the MCRB. After 19:13 UT the MCRB moved poleward and the electrojets weakened, while the most intense auroral activity had already faded in this MLT sector. The dynamic convection pattern after 19:22 UT was associated with the recovery of the EEJ. Amm et al. (2000) distinguished two topologically different types of HD, "rotation-type" and "expansion-type", the former being associated with the Earth's rotation and observed during quiet and moderately active geomagnetic conditions without substorm activity, and the latter during geomagnetically disturbed periods, i.e. during substorms, typically appearing in an earlier MLT sector.
The Harang discontinuity period in this event represents the "expansion-type" HD. When the WEJ associated with the Harang discontinuity intruded from the east into the EISCAT MLT sector, the UVI PAE and the VHF PCB moved rapidly in the poleward direction. Still, a part of the WEJ was flowing poleward of the polar cap boundary proxies (Fig. 8). Only after about 19:10 UT was almost all of the WEJ located equatorward of these boundaries. Figure 8 indicates that, in the Harang discontinuity region, the poleward part of the westward equivalent electrojet was flowing poleward of the UVI PAE and the VHF polar cap boundaries. Equivalent currents are the part of the real three-dimensional current system that is visible on the ground. The real east-west currents are likely to deviate to some extent from the equivalent east-west currents. The MIRACLE magnetometer Y components showed positive disturbances with quasi-periodic variations of a few minutes up to 71.5° cgmlat between 18:30 and 19:00 UT (data not shown). At the same time, Polar UVI images showed structured precipitation within the auroral bulge. Hence, it is plausible that structured upward field-aligned currents were flowing from this region. In addition, there is a gap in the latitudinal coverage of the magnetometer stations between BJN (71.5° cgmlat) and HOR (74.1° cgmlat), which decreases the accuracy of the 1-D upward continuation method within the region of interest. Because of these uncertainties, more investigations within the expansion-type HD should be made to verify the extension of the WEJ into the polar cap.
Reconnection electric field
The results of the calculated ionospheric reconnection electric field are presented in the three topmost panels of Fig. 9. The topmost panel shows the plasma drift velocity along the PCB normal, positive equatorwards. The second panel shows the same velocity component for the PCB motion. The third panel presents the calculated reconnection electric field E_r. During the substorm growth phase before 18:07 UT, during the polar cap expansion, E_r varied between 0 and 10 mV/m. The magnitudes are in agreement with earlier studies: e.g. de la Beaujardiere et al. (1991) found that E_r is less than 15 mV/m when the polar cap expands, and Blanchard et al. (1997) obtained values less than 10 mV/m before a substorm onset. Because the PCB was located equatorward of the radar measurement range, the reconnection electric field could not be calculated during the substorm onset and early expansion. In the late expansion and the recovery phases, E_r varied between 15 and 40 mV/m, which is of the same order of magnitude as found in earlier studies (de la Beaujardiere et al., 1991; Blanchard et al., 1997; Østgaard et al., 2005). The electric field also showed variations with periods of ~7-27 min. These variations are interpreted as variable reconnection occurring in the magnetotail and have been reported in earlier studies (e.g. Østgaard et al., 2005; Aikio et al., 2008).
Reconnection electric field and auroral emissions
In this section, the variations in the reconnection electric field are compared to the optical auroral emissions by the Polar UVI instrument. For a comparison with the reconnection electric field, a weighted emission intensity average was calculated for the UVI images corresponding to the EISCAT data points (1-min resolution, bottom panel of Fig. 9). The calculation was made from an 80-min wide MLT sector including the EISCAT beams, with latitude limits of 65° and 80° cgmlat. The emission intensities within the MLT sector were averaged in longitude by using 0.5°-wide latitude bins. The final result was obtained by weighting these longitudinal averages with their area and calculating the average. The 40-min wide MLT sector was enlarged to an 80-min wide sector so that possible auroral emission intensifications occurring close to EISCAT, but not exactly at the radar beams, would also be included in the calculation.
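A sketch of this weighted intensity average is given below. The exact area weighting used in the paper is not spelled out; here we assume each 0.5° latitude bin is weighted by the cosine of its central latitude, which is proportional to the bin area in a fixed MLT sector. Pixel data are synthetic.

```python
# Sketch of the weighted UVI intensity average: average in longitude
# within 0.5 deg latitude bins, then weight each bin mean by its area.
# The cos(latitude) area weighting is our assumption; data are synthetic.
import numpy as np

def weighted_uvi_mean(lat_pix, i_pix, lat_min=65.0, lat_max=80.0, width=0.5):
    edges = np.arange(lat_min, lat_max + width, width)
    means, weights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (lat_pix >= lo) & (lat_pix < hi)
        if sel.any():
            means.append(i_pix[sel].mean())                      # longitudinal mean
            weights.append(np.cos(np.radians(0.5 * (lo + hi))))  # ~ bin area
    return np.average(means, weights=weights)

rng = np.random.default_rng(0)
lat_pix = rng.uniform(65.0, 80.0, 5000)   # pixel latitudes in the sector
i_pix = rng.gamma(2.0, 2.0, 5000)         # pixel intensities
print(weighted_uvi_mean(lat_pix, i_pix))
```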
During the growth phase, E_r intensified around 17:25 UT (line a in Fig. 9). Polar UVI showed the formation of an east-west oriented auroral arc at the poleward edge of the oval in the evening sector within the f-o-v of EISCAT (Fig. 7a). The structure was 3 MLT hours wide and lasted about 12 min, from 17:26 UT to 17:38 UT. The brightening of the arc could be seen as an intensification of the Polar UVI intensity (bottom panel of Fig. 9). The IMF Bz had been mainly weakly southward for several hours, loading the magnetosphere (Fig. 5). This weak reconnection burst with an associated localized auroral activation is a signature of the release of a small amount of the energy stored in the magnetotail before the actual substorm onset about 40 min later, at 18:07 UT. No estimates of the reconnection electric field in the early expansion phase could be obtained. The rather high E_r values in the late expansion and early recovery phase between 18:58-19:17 UT were associated with the rapid fading of bright aurora in the EISCAT local time sector at 19:00-19:03 UT, while the most intense precipitation concentrated about an hour of MLT westward of EISCAT (Fig. 7e). The activation is evidently an auroral poleward boundary intensification (PBI) at the poleward boundary of the oval. PBIs are transient nightside geomagnetic disturbances with a localized auroral signature that appear at the poleward boundary of the auroral oval and can then extend equatorward inside the auroral oval (Lyons et al., 1999; Zesta et al., 2002). PBIs occur during all levels of geomagnetic activity (Lyons et al., 1998). The intensification of E_r which maximized at 19:41 UT (line g in Fig. 9) was associated with a localized poleward boundary intensification at the poleward boundary of the double oval (Elphinstone et al., 1995) in the EISCAT MLT sector (Fig. 7g). The UVI intensity curve showed a peak close to the E_r maximum. The IMF Bz had a southward excursion during the preceding 15 min (Fig. 5). The PBI was about 1.5 h MLT wide and lasted roughly 8 min, 19:41-19:49 UT. The next E_r maximum, which was also associated with a PBI, occurred at 20:01 UT (line h in Fig. 9). The PBI was localized within about the same 1.5 h wide MLT range as the previous PBI. The PBI lasted about 9 min, from 20:01 to 20:10 UT. Both E_r and the averaged UVI intensity showed a very clear peak. The last clear intensification in the reconnection electric field with a PBI occurred about an hour later, at 20:59 UT (line i in Fig. 9). Due to the gap in the UVI data, the duration of the PBI in Fig. 7i is not known, but the activation started before 21:01 UT and ended at about 21:10 UT. The width of the PBI was about 0.5 h MLT in the beginning, but it evolved quickly into multiple beads extending over 3 h MLT (Fig. 7j). During the maximum E_r, UVI had a data gap, but a maximum in the averaged intensity was observed about 5 min later. From the second panel of Fig. 9 it can be seen that all of the enhancements in E_r were associated with a poleward contracting polar cap boundary.
The main factor producing the calculated reconnection electric field was the poleward motion of the PCB in events a, h and i. The enhancement in equatorward plasma flow was mainly responsible for the reconnection electric field in events f and g (Fig. 9, top panel). The latter case is in accordance with de la Beaujardiere et al. (1994), who observed that the flow rate across the poleward boundary of the aurora increased significantly during periods of PBIs. De la Beaujardiere et al. concluded that the feature is associated with a local increase in the reconnection rate (in accordance with Eq. 3), with qualitatively estimated peak values of the order of 25 mV/m. Besides the PBIs in the 80-min wide MLT sector centered at EISCAT that are described above, PBIs occurred also in other MLT sectors. In the growth phase, several east-west oriented PBIs appeared, typically between 21:00 and 22:00 MLT. The brightenings were about 2 h MLT wide and lasted from 3 min to 15 min. In the recovery phase, a few additional PBIs could be distinguished.
Summary and discussion
We have studied some specific aspects of the reconnection process that can be related to high-latitude boundaries. The dynamics of the polar cap boundary in the evening sector during a substorm on 25 November 2000 were examined by using EISCAT incoherent scatter radar measurements. As a measure of the nightside reconnection rate, the local ionospheric reconnection electric field was estimated by the method introduced by Vasyliunas (1984), where the electric field is calculated by using the plasma flow across the PCB. The plasma drift velocity was calculated from the dual-beam measurements of the EISCAT VHF radar, and the PCB was determined using EISCAT measurements on both the mainland and Svalbard by the method of Aikio et al. (2006). The VHF PCB was compared with the poleward auroral emission boundary extracted from global optical images of the Polar UV Imager. On average, the PAE boundary was located 0.7° poleward of the EISCAT PCB in this event, though large scatter occurred in individual points. The most probable cause for the difference in the UVI PAE and the VHF PCB locations is the wobble of the Polar satellite, which causes smearing of the UVI images along the wobble direction. In this case, the wobbling was unfortunately in the direction in which the PAE boundary was determined (~10:00-22:00 MLT). During one DMSP overflight, a close co-location of the b5e boundary (poleward edge of the oval) and the PAE boundary was found (the PCB was at too low a latitude at that moment for the EISCAT VHF to see it). The calculation of the 2-D plasma velocity vectors allowed the determination of the convection reversal boundary from the EISCAT data. A striking feature was the similar temporal evolution of the PCB (from the Te data) and the CRB (from the plasma velocity data), lending credence to the method of PCB determination. The CRB was observed to follow the motion of the PCB and to be located 0.5-1° cgmlat equatorward of the PCB. The offset is consistent with the results by Sotirelis et al. (2005), who compared SuperDARN HF radar boundaries with DMSP particle boundaries. Sotirelis et al. found an equatorward offset of the CRB relative to the PCB that varies according to local time, from zero near noon to ~1° near dawn and dusk, and is largest near midnight. In the early morning sector, offsets as large as 3-4° have been observed.
The PCB-CRB offset is interpreted to result from a small viscous-like interaction between the magnetosheath and the low-latitude boundary layer, resulting in antisunward flow on closed field lines next to the polar cap boundary (Sotirelis et al., 2005, and references therein). The VHF PCB, the CRB, and the UVI PAE boundary were studied in the framework of the 1-D ionospheric equivalent east-west electrojets calculated from the MIRACLE magnetometer measurements by using the upward continuation method by Vanhamäki et al. (2003). During the substorm growth phase, all the boundaries showed a similar drift motion equatorwards on the poleward side of the eastward electrojet region. The UVI PAE boundary was generally located poleward of the VHF PCB at varying distances. The onset of the substorm expansion occurred about 2 h MLT east of the observed local time sector and was associated with a sudden equatorward leap of 2° of the EEJ region. After 31 min, a dynamical electrojet pattern was observed to expand to the EISCAT MLT in association with intense auroral activity. The current pattern was formed by an intense westward electrojet on the poleward side of the eastward electrojet and was in the f-o-v of EISCAT and MIRACLE for about 50 min. We interpret the current system to represent a dynamical "expansion-type" Harang discontinuity, following the classification by Amm et al. (2000). Within the Harang discontinuity region, the UVI PAE and the VHF PCB boundaries moved rapidly poleward within the WEJ. A part of the calculated equivalent westward electrojet was flowing poleward of the boundaries. Later in the recovery phase, the boundaries generally moved equatorwards. After shifting to the post-midnight sector, the VHF PCB and the UVI PAE together followed the poleward edge of the WEJ. The separation of the CRB and the MCRB in the growth and recovery phases is consistent with earlier evening-sector studies, where the MCRB was found to be typically located 1-2° cgmlat equatorward of the CRB (Fontaine and Peymirat, 1996; Amm, 1998). In the post-midnight sector the MCRB was found to be located 1-2° poleward of the CRB, which is close to the values of 0.5-1° observed earlier by Amm et al. (2003). It is suggested that the shift is due to the field-aligned currents flowing in the vicinity of the HD (Amm, 1998; Amm et al., 2003). The calculated local ionospheric reconnection electric field was found to vary between 0 and 10 mV/m during the substorm growth phase. During the late expansion and recovery phases, values up to 40 mV/m were observed. The values are in agreement with earlier studies (de la Beaujardiere et al., 1991; Blanchard et al., 1997; Østgaard et al., 2005). During the latter period, the electric field also showed variations with periods of ~7-27 min. Similar periods have been reported in previous studies and interpreted as variable reconnection occurring in the magnetotail (e.g. Østgaard et al., 2005; Aikio et al., 2008). Comparison of the reconnection electric field with the Polar UVI data showed a clear correlation between intensifications of E_r and auroral poleward boundary intensifications (PBIs). The PBIs appeared within one minute of the ionospheric reconnection electric field maxima and lasted 5-12 min. The PBI-associated E_r maxima were 12 mV/m in the growth phase and 27 mV/m, 32 mV/m, 27 mV/m and 26 mV/m in the recovery phase. The widths of the PBIs were 3 h MLT in the growth phase and 1.5 h MLT in the recovery phase.
For the last PBI, in the late recovery/quiet phase, the width was initially 0.5 h MLT, but then the PBI evolved into beads covering about 3 h MLT. In all five cases the PCB contracted poleward, but in two cases during the substorm recovery phase, an enhanced equatorward plasma flow velocity across the PCB was the main factor producing the enhanced reconnection electric field. An enhanced southward plasma velocity in association with arc intensifications at the poleward auroral boundary has also been observed by de la Beaujardiere et al. (1994). It has been suggested that when the UVI intensity is high, there is a high ionospheric conductivity and significant frictional coupling between the ionospheric plasma and the neutral atmosphere. Then plasma flows are retarded and it is the PCB that moves. When the conductivity is low, ionospheric flows are excited rather than motions of the PCB (Boudouridis et al., 2008). In this study, the PCB motions were generally associated with bright emissions at the boundary and thus with high conductivities. However, the largest plasma flow, in event g, was also associated with rather intense emissions, in contradiction with Boudouridis et al. (2008). The other plasma flow event, f, was indeed associated with less intense emissions at the boundary. To draw clear conclusions, more events should be studied. PBIs are generally considered as ionospheric signatures of longitudinally narrow earthward plasma sheet flow bursts, bursty bulk flows (BBFs) (Lyons et al., 2002, and references therein). For the source of PBIs and BBFs, two processes have been suggested: localized distant X line reconnection bursts (Sergeev et al., 2000) and global ULF pulsation modes of the magnetosphere. Zesta et al. (2002) found that PBIs are either equatorward extending (N-S or E-W structures) or non-equatorward extending. They suggested that the equatorward extending north-south PBIs would be associated with longitudinally narrow BBFs, whereas the wider east-west oriented structures could be associated with the global ULF modes. The non-equatorward extending PBI structures would be associated with a shear instability at the separatrix boundary. However, more recently Zesta et al. (2006) deduced that every equatorward extending PBI structure they studied, including both north-south and east-west structures, was associated with a fast flow channel in the tail within the same local time sector. Aikio et al. (2008) found periodic poleward expansions of the PCB which were associated with intensifications of the WEJ in the vicinity of the PCB in the recovery phase of a substorm. Since no signatures of global ULF waves were found, they suggested that enhanced reconnection bursts took place in the tail. In this study, at least one clear localized WEJ enhancement with a simultaneously poleward expanding PCB was observed after 20:00 UT, and by utilizing the dual-beam VHF measurements we were able to show that it indeed was associated with an enhanced reconnection electric field (line h in Fig. 9). The spatial resolution of the Polar UV Imager was not high enough in this case to tell whether or not the PBIs were narrow auroral structures that propagated equatorward. However, all of them, including the last one with multiple beads, were associated with enhanced reconnection, which suggests that the PBIs were a consequence of temporarily enhanced, longitudinally localized magnetic flux closure in the magnetotail.
Large-Eddy Simulation on turbulent flow and plume dispersion over a 2-dimensional hill
Abstract. The dispersion analysis of airborne contaminants, including radioactive substances from industrial or nuclear facilities, is an important issue for air quality maintenance and safety assessment. In Japan, many nuclear power plants are located in complex coastal terrains. In these cases, terrain effects on the turbulent flow and plume dispersion should be investigated. In this study, we perform Large-Eddy Simulation (LES) of turbulent flow and plume dispersion over a 2-dimensional hill and investigate the characteristics of mean and fluctuating concentrations.
Introduction
The dispersion analysis of airborne contaminants, including radioactive substances from industrial or nuclear facilities, is an important issue in air quality maintenance and safety assessment. Over flat terrain, the mean concentration of a plume can be easily predicted by using a Gaussian plume model. However, many nuclear power plants in Japan are located in complex coastal terrains. In this case, the effects of the terrain should be taken into consideration when predicting plume dispersion. Terrain effects on turbulent flow and/or plume dispersion have been investigated in many studies. For example, Jackson and Hunt (1975) proposed a theoretical model for the analysis of turbulent structures over a gentle hill. In one of the earliest wind tunnel experimental studies, Khurshudyan et al. (1981) conducted wind tunnel experiments of turbulent flow and plume dispersion over 2-dimensional hills with different slopes and investigated the influence of hill slope and source location on the flow and concentration fields. Castro and Snyder (1982) conducted wind tunnel experiments on plume dispersion over 3-dimensional hills with various ratios of spanwise hill breadth to height and investigated the effect of hill shape on mean concentration distributions. Arya and Gadiyaram (1986) also conducted wind tunnel experiments on flow and dispersion in the wakes of 3-dimensional low hills and investigated the difference between the ground-level mean concentration of a plume released from a point source with and without the hill. Sada (1991) investigated the streamwise variation of vertical profiles of mean wind velocity, turbulence kinetic energy, and mean concentration over a 2-dimensional gentle hill using wind tunnel experiments. In one of the earliest numerical studies, Hino (1968) conducted numerical experiments for plume dispersion over a complex terrain and showed the contaminant distribution patterns. More recently, Castro and Apsley (1997) performed numerical simulations of plume dispersion over 2-dimensional hills using the standard k-ε turbulence model and compared the results with the experimental data. These previous studies have focused on the terrain effects on the characteristics of turbulent flow and/or mean concentration.
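As background for the flat-terrain baseline mentioned in the Introduction, a minimal sketch of a Gaussian plume model for a continuous point source in a uniform wind is given below; the dispersion parameters and all numbers are illustrative, not values from this study.

```python
# Minimal sketch of a Gaussian plume model over flat terrain. In practice,
# sigma_y and sigma_z grow with downwind distance; fixed illustrative
# values are used here for simplicity.
import numpy as np

def gaussian_plume(y, z, Q=1.0, U=5.0, H=40.0, sy=20.0, sz=10.0):
    """Mean concentration at crosswind distance y and height z [m], for
    emission rate Q [g/s], wind speed U [m/s] and effective source height
    H [m]; the second vertical term is the ground-reflection image source."""
    lateral = np.exp(-y**2 / (2.0 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sz**2))
                + np.exp(-(z + H)**2 / (2.0 * sz**2)))
    return Q / (2.0 * np.pi * U * sy * sz) * lateral * vertical

# Ground-level centreline concentration for the assumed parameters:
print(gaussian_plume(y=0.0, z=0.0))
```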
These previous studies have focused on terrain effects on the characteristics of turbulent flow and/or mean concentration. Another issue related to plume dispersion is the potential problem of the accidental release of hazardous and flammable materials. When assessing the hazard to human health from toxic substances, the existence of high concentration peaks in a plume should be considered. For safety analysis of flammable gases, certain critical threshold levels should be evaluated. In such situations, not only mean but also fluctuating concentrations should be estimated. In this study, we perform a numerical simulation of the unsteady behavior of turbulent flow and plume dispersion over a 2-dimensional hill by using an LES, which can give more detailed information on the flow and concentration fields than the wind tunnel experiments of Sada (1991), and investigate the characteristics of mean and fluctuating concentrations. Numerical model and computational settings The basic equations for the LES model are the spatially filtered continuity equation, the Navier-Stokes equation, and the transport equation for concentration. The subgrid-scale (SGS) Reynolds stress is parameterized using the standard Smagorinsky model (Smagorinsky, 1963) with a Van Driest damping function (Van Driest, 1956), where the Smagorinsky constant is set to 0.12 (Iizuka and Kondo, 2004) for estimating the eddy viscosity. The subgrid-scale scalar flux is also parameterized by an eddy viscosity model, and the turbulent Schmidt number is set to 0.5. Various SGS models for LES have been proposed besides the standard Smagorinsky model. For example, dynamic Smagorinsky models have been proposed by Germano et al. (1991), Lilly (1992), and Meneveau et al. (1996). However, Iizuka and Kondo (2004) examined the influence of various SGS models on the prediction accuracy of LES of turbulent flow over hilly terrain and showed that the prediction accuracy of LES with the standard Smagorinsky model is better than that with the dynamic Smagorinsky-type models. This indicates that the dynamic Smagorinsky-type models are not always effective for determining the model constant. As a static-type SGS model, Nicoud and Ducros (1999) proposed the wall-adapting local eddy-viscosity (WALE) model. This model can capture the effects of both the strain and the rotation rate of the small-scale turbulent motions without a wall damping function. According to Temmerman et al. (2003), the WALE model shows better performance than the dynamic/standard Smagorinsky model. However, the conventional Smagorinsky model, which has the advantage of simplicity and low computational cost, is adopted in our LES model because the focus of our research is not on such small-order effects of the turbulent flow. The coupling algorithm for the velocity and pressure fields is based on the Simplified Marker and Cell (SMAC) method with the second-order Adams-Bashforth scheme for time integration. The SMAC method, proposed by Amsden and Harlow (1970), is an algorithm for numerically solving the Navier-Stokes equation. The Poisson equation for pressure is solved by the Successive Over-Relaxation (SOR) iterative method. For the spatial discretization of the governing equations of the flow field, a second-order accurate central difference is used. For the dispersion field, the Cubic Interpolated Pseudoparticle (CIP) method proposed by Takewaki et al.
(1985) and a second-order accurate central difference method are used for the advection and diffusion terms, respectively. The time step interval ∆tU∞/H is 0.005 (∆t: time step). To perform an LES of plume dispersion, a thick turbulent boundary layer (TBL) flow with strong velocity fluctuations should be simulated in order to mimic a plume released from an elevated point source. Various methods for producing a realistic TBL flow have been proposed. For example, Mochida et al. (1992) performed a preliminary LES of a channel flow to provide the inlet boundary condition. This method can be easily applied, but the flow is driven by a pressure gradient under the periodic boundary condition in the streamwise direction. Because it is known that the influence of a pressure gradient on TBL characteristics is large (Kline, 1967), it is desirable to reproduce an approaching flow without a pressure gradient. Figure 1 shows a schematic illustration of the computational regions for plume dispersion over a hill immersed in a fully-developed TBL flow. First, a spatially-developing TBL flow with strong velocity fluctuations is generated in the driver region by incorporating the inflow turbulence generation method proposed by Kataoka and Mizuno (2002) into an upstream small fraction of the driver region, with a 2-dimensional roughness bar placed at the ground surface. Next, the inflow turbulence data obtained near the exit of the driver region are imposed at the inlet of the main region at each time step, and the calculation of turbulent flow and plume dispersion over the hill is performed. The total size and number of grid points of the computational regions are 81.5H × 12.5H × 25.0H (H: hill height), with a Cartesian grid system of 810 × 120 × 100 points in the x-, y-, and z-directions, respectively. At the exit of the driver and main regions, a Sommerfeld radiation condition (Gresho, 1992) is imposed. At the top, a free-slip condition for the streamwise and spanwise velocity components is imposed and the vertical velocity component is set to 0. At the sides, a periodic condition is imposed, and at the flat ground surface, a no-slip condition for each velocity component is imposed. In the flow field, the immersed boundary method proposed by Fadlun et al. (2000) is used in order to account for the terrain effects. In the concentration field, a zero gradient is imposed at all the boundaries, and a fractional volume term is introduced into the scalar conservation equation to account for the space occupied by the terrain. A 2-dimensional hill with a mean slope angle of 20° is placed in the main region. The center of the hill is located 12.5H downstream of the inlet of the main region. The origin of the coordinates is at the ground surface at the center of the hill. The release point of a tracer gas is located at a distance of 9.1H upstream of the center of the hill, at an elevation of 0.45H. The Reynolds number based on the hill height and the free-stream velocity (U∞) at the hill height is almost 5000. Previous wind tunnel experiments for evaluation of the model performance Many wind tunnel experimental studies of the dispersion characteristics of a plume over a hilly terrain have been conducted. For example, Sada et al.
(1991) investigated turbulence structures and the dispersion characteristics of a plume over a 2-dimensional hill. The experiments were conducted under conditions of a neutrally stratified TBL flow in the wind tunnel of the Central Research Institute of Electric Power Industry. The test section of the wind tunnel is 20 m long, 3 m wide and 1.5 m high. A TBL flow with strong velocity fluctuations from the ground surface up to the upper level was generated using roughness elements with L-shaped cross sections placed on the floor at the entrance of the wind tunnel section. The 2-dimensional hill was 0.11 m high and 0.6 m long, with a hill aspect ratio (ratio of hill length to height) of about 5.5. The release point was located at a distance of 1.0 m upstream of the center of the hill, at a height of 0.05 m. The thickness of the TBL at the downstream position of the release point is 0.3 m. Under these conditions, the vertical profiles of mean wind velocity, turbulence kinetic energy and mean concentration over the 2-dimensional hill were obtained. In this study, in order to evaluate the model performance, we compare the LES data of turbulent flow and plume dispersion over a 2-dimensional hill with the wind tunnel experimental data of Sada et al. (1991), described above. Turbulence characteristics of the approaching flow Figure 2 shows a comparison of the LES results with the experimental data obtained by Sada (1991) for vertical profiles of mean wind velocity, turbulence intensities and Reynolds stress in the driver region. Each turbulence statistic obtained by the LES is found to be almost consistent with the experimental data. Therefore, it is considered that the TBL flow is successfully simulated. Turbulence characteristics over the hill Figures 3 and 4 show a comparison of the LES results with the experimental data (Sada, 1991) for vertical profiles of mean wind velocity (U/U∞) and turbulence kinetic energy (k/U∞²) at the positions x/H = −2.7, 0.0, 1.8, 3.6, 8.2, 11.8, 14.5 and 18.2. There are slight discrepancies in mean wind velocity between the LES results and the experimental data at x/H = 1.8 and 3.6. The values of the reattachment length behind the hill for the previous experiment (Sada, 1991) and the LES are L/H = 5.9 and 6.9 (L: reattachment length), respectively. Iizuka and Kondo (2006) also performed an LES with the standard Smagorinsky model for a 2-dimensional hill with a hill aspect ratio of 5.0 and compared it with the reattachment length of the experimental data of Ishihara et al.
(2001). The results showed a similar overestimation of the computed reattachment length. Figure 6 shows a comparison of the LES results with the experimental data (Sada, 1991) for streamwise variations of the mean concentration near ground level. The mean concentration (C) is normalized by the free-stream velocity, hill height and source strength (Q). The basic characteristics, such as the increase of mean concentration towards the crest, the rapid decrease behind the hill, and the gradual decrease with downwind distance, are similar to the experimental data. Figure 7 shows a comparison of the LES results with the previous experimental data for mean concentration at x/H = −4.5, −2.7, 0.0, 1.8, 4.5, 8.2, 11.8, 14.5 and 18.2. The mean concentration is normalized by the maximum of the mean concentration (Cmax) at each downstream position. The peak locations of the mean concentration obtained from the LES are lower than those obtained from the experiment, and the mean concentration values are overestimated in the wake region. However, the basic characteristics of plume dispersion, such as the rapid vertical spread of the plume behind the hill and the formation of uniform mean concentration profiles with downwind distance, are similar to the experimental data. Figure 8 shows the streamwise variation of the vertical profiles of r.m.s. concentration (c_r.m.s.). The r.m.s. concentration is normalized by the maximum of the r.m.s. concentration (c_r.m.s.,max) at each downstream position. Here, we show only the r.m.s. concentration profiles obtained by the LES, since the fluctuating characteristics of concentration were not discussed in the previous experiment (Sada, 1991). We found that the r.m.s. values become small inside the wake region, compared with the mean concentration values, due to the smoothing effects of recirculating flows. Conclusions In this study, we performed a numerical simulation of the unsteady behavior of turbulent flow and plume dispersion over a 2-dimensional hill immersed in a fully-developed TBL by using an LES and investigated the characteristics of mean and fluctuating concentrations. First, a spatially-developing TBL flow with strong velocity fluctuations was generated in the driver region by incorporating Kataoka's method (2002) into an upstream small fraction of the driver region, with a 2-dimensional roughness bar placed at the ground surface. Then, we imposed the obtained inflow turbulence data at the inlet of the main region and calculated the flow and dispersion over the hill. Compared with the experimental data (Sada, 1991), the main characteristics, such as the complex behaviors of turbulent flow and plume dispersion behind the hill, were successfully simulated. Furthermore, the r.m.s. concentration was found to be smaller than the mean concentration inside the wake region due to the smoothing effects of recirculating flows. Fig. 1. Schematic illustration of the computational model: (a) driver region for generation of a spatially-developing TBL flow and (b) main region for plume dispersion over the hill. Fig. 3. Streamwise variation of vertical profiles of mean wind velocity.
Fig. 5. Instantaneous plume dispersion field. The yellow areas on the isosurface indicate 0.01% of the initial concentration. Figure 5 shows the instantaneous plume dispersion field at times t* (= tU∞/H) = 10.8, 21.6 and 43.2 after the plume release. The plume first moves upward above the hill, and then a portion of the plume is entrained into the wake region. At t* = 43.2, the plume is entirely dispersed in the wake region.
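As a complement to the figures, the mean and fluctuating (r.m.s.) concentration statistics discussed in this paper can be formed from a sampled concentration time series as sketched below; the normalization by U∞, H and Q follows the convention stated above, while the synthetic input signal is merely a stand-in for actual LES output.

```python
import numpy as np

def concentration_stats(c, U_inf, H, Q):
    """Normalized mean and r.m.s. concentration from a time series c(t).

    The mean is normalized as C * U_inf * H**2 / Q, following the paper;
    the r.m.s. is the standard deviation of the fluctuation c' = c - C
    (normalized the same way here for simplicity, whereas the paper
    normalizes profiles by their maximum at each downstream position).
    """
    c = np.asarray(c, dtype=float)
    scale = U_inf * H**2 / Q
    return c.mean() * scale, c.std(ddof=0) * scale

# Synthetic stand-in for an LES concentration signal at one point:
rng = np.random.default_rng(0)
c = np.clip(0.4 + 0.1 * rng.standard_normal(20000), 0.0, None)
print(concentration_stats(c, U_inf=1.0, H=0.11, Q=1.0))
```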
Vibration-based SHM for cultural heritage preservation: the case of the S. Pietro bell-tower in Perugia In the present work, multivariate statistical analysis techniques are newly applied in the field of condition assessment of cultural heritage structures. More specifically, the paper presents the design and the implementation of an SHM system for the bell-tower of the Basilica of San Pietro, one of the most relevant monuments of the city of Perugia, Italy. The system comprises three high-sensitivity accelerometers permanently installed on top of the tower and a remote server that automatically processes the data so as to acquire modal parameters and to use such information for novelty analysis and health assessment. In the paper, after a brief description of the permanent monitoring system installed on the structure and of the adopted SHM strategy, the results of the first months of continuous monitoring are presented. The potential of the aforementioned statistical techniques for damage detection is then verified by using the continuously identified eigenfrequencies of the bell-tower. Introduction Ambient Vibration Testing (AVT) and Operational Modal Analysis (OMA) are widespread and reliable methods of modal testing in civil engineering. While they are commonly carried out on flexible structures, such as bridges and cables, in recent years particular attention has also been devoted to their application to the conservation of monumental buildings. In this regard, vibration-based structural health monitoring (SHM) systems have demonstrated in many applications their capability to enable automated condition assessment of slender structures with a limited number of sensors [1-6], leading to a cost-effective optimization of maintenance activities. However, the development of SHM systems able to detect early and alert about the occurrence of structural anomalies still remains a challenge in many cases. Methods of multivariate statistical analysis, such as principal component analysis and novelty detection, may overcome this issue, and even if documented validations of their effectiveness at damage detection in full-scale structures are not yet available, their application, especially in bridge engineering, is becoming very popular. Equipping historical and monumental constructions with permanent SHM systems may lead to an optimal employment of the economic resources available for maintenance and rehabilitation activities, especially after seismic events. In fact, permanent SHM systems possess all the advantages of both AVT and OMA, being fully non-destructive and minimizing interference with the normal use of the structure, while also allowing a continuous tracking of the actual condition of the structure, typically using a limited number of sensors. For these reasons, applications of vibration-based diagnostic and monitoring techniques to historic monumental buildings are becoming popular [7-12]. The authors have recently started a research project for the monitoring of two relevant historical constructions in Italy: the bell-tower of the Basilica of San Pietro in Perugia and the dome of the Basilica of Santa Maria degli Angeli in Assisi. This paper presents the vibration-based monitoring system of the bell-tower of the Basilica of San Pietro. An innovative technique, combining automated mode tracking, multiple data regression, principal component analysis and novelty detection, is proposed for automatically revealing any anomaly in the structural
behaviour. The results of about five months of monitoring demonstrate the ability of the system to reveal even small changes in the structural behaviour, possibly related to a developing damage pattern, and show promise for a more widespread and systematic implementation of vibration-based SHM systems for cultural heritage preservation. The San Pietro bell-tower The Basilica of San Pietro in Perugia is located in the southern part of the city. The abbey was erected in 996, while the first erection of the bell-tower (Figure 1) dates back to the 13th century. Throughout the centuries, the bell-tower was subjected to several structural and architectural interventions. The current configuration dates back to the 15th century and the design is attributed to the architect Bernardo Rossellino. Various structural interventions were necessary to repair damage caused by lightning strikes, which several times threatened the stability of the structure. In recent years, the restoration and consolidation measures for the damage that occurred after the strong Umbria-Marche earthquake of 1997 were completed. The Benedictine abbey consists of several architectural volumes, including the basilica, the convent and, today, other local institutions, arranged around three main cloisters. In this context, the bell-tower stands out between the basilica and other branches of the abbey, with a total height of about 61.4 m. In the first 17 m the structure is restrained by the bordering buildings, so that the tower is free to move only in the upper 45 m (Figure 2). The bell-tower consists of a dodecagonal shaft in the first 26 m, a belfry with hexagonal cross section reaching a height of about 41 m, and a cusp at the top. The constituent material is not homogeneous. The shaft is made of stone masonry, with large external portions realized in brick masonry as structural rehabilitation measures following several instances of damage. The belfry and the cusp are made of brick masonry, but the former is characterized by an external curtain of stones. Moreover, the belfry exhibits tall mullioned windows on each of its six sides, resulting in significant slenderness in the upper part of the structure. Monitoring system and data analysis 3.1 AVT and modal characterization AVT and OMA of the bell-tower were performed on February 16th, 2015, when a fairly strong wind was blowing. The AVT was carried out by using high-sensitivity accelerometers located in two sections of the bell-tower (Figure 3): at the base of the cusp (40.8 m) and at the base of the belfry (29.1 m). Uni-axial accelerometers, model PCB 393B12 (10 V/g sensitivity), were used, and data were recorded by using a 24-channel system, carrier model cDAQ-9188 with NI 9234 data acquisition modules (24-bit resolution, 102 dB dynamic range and anti-aliasing filters). The data were downsampled to 100 Hz for storage purposes. Modal parameters of the bell-tower were extracted from the AVT data by using 30-minute-long time histories and by application of a fully automated Stochastic Subspace Identification (SSI) technique [13]. Table 1 summarizes the values of the identified natural frequencies and the corresponding modal damping ratios, where mode types refer to the reference axes depicted in Figure 3. Identified mode shapes are shown in Figure 4. Monitoring system The continuous monitoring system comprises three high-sensitivity uni-axial piezoelectric accelerometers (10 V/g sensitivity) installed at the base of the cusp, with the configuration depicted in Figure 5.
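The fully automated SSI procedure used for both the AVT and the continuous monitoring is considerably more elaborate than can be shown here, but a minimal covariance-driven SSI of the kind it builds on can be sketched as follows; the model order, the number of correlation blocks and the pole filtering thresholds are illustrative choices, not those of the deployed system.

```python
import numpy as np

def ssi_cov(y, fs, order=20, num_blocks=30):
    """Minimal covariance-driven SSI for output-only modal analysis.

    y  : (n_channels, n_samples) array of acceleration records
    fs : sampling rate [Hz]
    Returns natural frequencies [Hz] and damping ratios (conjugate
    pole pairs appear twice; in practice a stabilization diagram over
    several model orders would be used to select physical modes).
    """
    n_ch, n_s = y.shape
    # Output correlation matrices R_i ~ E[y(t + i) y(t)^T]
    R = [y[:, i:] @ y[:, :n_s - i].T / (n_s - i)
         for i in range(2 * num_blocks)]
    # Block Hankel matrix of correlations
    H = np.block([[R[i + j + 1] for j in range(num_blocks)]
                  for i in range(num_blocks)])
    U, s, _ = np.linalg.svd(H)
    Ob = U[:, :order] * np.sqrt(s[:order])      # observability matrix
    # Shift invariance of the observability matrix gives the state matrix
    A = np.linalg.lstsq(Ob[:-n_ch], Ob[n_ch:], rcond=None)[0]
    mu = np.linalg.eigvals(A).astype(complex)   # discrete-time poles
    lam = np.log(mu) * fs                       # continuous-time poles
    f = np.abs(lam) / (2.0 * np.pi)             # natural frequencies
    zeta = -lam.real / np.abs(lam)              # damping ratios
    keep = (f > 0) & (zeta > 0) & (zeta < 0.2)  # crude physical filter
    return f[keep], zeta[keep]
```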
The continuous monitoring data are recorded using the same data acquisition system used in the AVT, connected to a host PC located on site (Figure 6). Data are recorded at about 1600 Hz, down-sampled to 100 Hz and stored in separate files of 30 minutes of recording each. The recorded data are then sent through the internet to a remote server located in the Laboratory of Structural Dynamics of the Department of Civil and Environmental Engineering of the University of Perugia, where they are processed through an ad hoc MATLAB code. Figure 7 shows the location of the bell-tower and its network connection to the laboratory. Figure 8 shows the remote access used for quality control of the acquired data during monitoring. The data processing code comprises the following steps:
- pre-processing analysis for detecting and correcting spikes and other anomalies in the data;
- identification and removal of acceleration data recorded under the excitation of the swinging bells;
- low-pass filtering and decimation of the data to 40 Hz;
- application of the fully automated SSI modal identification procedure;
- modal tracking based on a similarity check between estimated modal parameters.
The dynamic monitoring started on December 9th, 2014, and the data reported in this paper cover a continuous monitoring period lasting up to the end of March 2015. Within the same monitoring period, temperature and wind speed data recorded by a weather station located near the bell-tower are also available. As better explained in the following developments of the paper, these data have allowed preliminary investigations of the correlations between the modal parameters of the bell-tower and environmental conditions. Damage detection methodology Within the permanent monitoring system, any change in the dynamic behaviour of the bell-tower, possibly related to some developing damage pattern, is automatically detected by applying statistical process control tools to the time histories of the identified modal frequencies. The statistical process control tools adopted in the present study have a twofold purpose: (i) to remove the effects of changes in environmental and operational conditions from the identified frequency time histories, and (ii) to detect changes in the frequency data, in the form of statistical outliers, that raise suspicion about the possible development of structural damage. In the present study, the bulk of data stemming from the permanent monitoring system is first processed through the afore-presented automated modal identification and modal tracking procedures. Then, the classical techniques of Multivariate Linear Regression (MLR) and Principal Component Analysis (PCA) are combined in a single statistical tool, which is adopted for removing environmental effects. Finally, a technique of novelty analysis is adopted for damage detection. The adopted statistical process control tools are described below. Tracked modal frequencies are collected in an n×N-dimensional observation matrix, Y, where N is the number of observations and n is the number of identified frequencies. The residual error matrix, E, is defined as follows for the purpose of damage detection: E = Y − Ŷ, (1) where Ŷ collects the modal frequencies independently estimated through a proper statistical model able to reproduce the variance associated with changes in environmental parameters. After computing the matrix E in Eq. (1), a damage condition is identified as an anomaly in the residuals, under the assumption that damage induces a change in the distribution of E.
To this aim, the classic statistical process control tool named novelty analysis is adopted. It basically consists of the use of control charts based on a properly defined statistical distance. In this application, the T²-statistic is exploited, which is defined as T² = r (Ē_r − Ē)ᵀ Σ⁻¹ (Ē_r − Ē), where r is an integer parameter referred to as the group averaging size, Ē_r is the mean of the residuals in the subgroup of the last r observations, while Ē and Σ are the mean values and the covariance matrix of the residuals, respectively. Both quantities are statistically estimated in a reference period in which the structure is in the healthy state, called the training period. An anomaly in the data is identified in the form of an outlier, that is, a value of the statistical distance which lies outside fixed control limits. In the present application, the lower control limit is 0, while the Upper Control Limit (UCL) is statistically computed as the value of T² corresponding to a cumulative frequency of 95% in the training period. In this way, if the data collected in the training period are statistically meaningful (the training period is sufficiently long), there is approximately a 5% probability of observing an outlier when the structure is in the healthy state (false alarm). Conversely, if a relative frequency of outliers significantly greater than 5% is steadily observed over time, a change in the statistical distribution of the residuals is supposed to have occurred, thus denoting an anomalous structural condition not experienced during the training period. In the present application, the tools of MLR and PCA are combined to achieve an effective damage detection methodology. In particular, an MLR model is adopted first and, then, PCA is applied to the residuals of the MLR. Following this approach, the residual error matrix to be used in novelty analysis, Eq. (2), is computed by removing from the MLR residuals their projection onto the retained principal components: E = (I − U Uᵀ) E_MLR, (2) where E_MLR collects the MLR residuals and the columns of U are the retained principal component loading vectors. Analysis of monitoring data Figure 9 (a) shows the time histories of the identified natural frequencies of the bell-tower during about four months of monitoring. As shown by these results, the frequencies of all seven modes identified in the AVT are also consistently identified during monitoring, with the only exception of the fourth mode, which is quite elusive, conceivably due to an insufficient level of excitation of this mode in operational conditions. A detailed view of the time evolution of the first two eigenfrequencies, shown in the plot of Figure 9 (b), clearly highlights daily fluctuations of the natural frequencies, which are conceivably due to changes in environmental conditions, primarily air temperature. This has been checked by looking at the correlations between the natural frequencies and the air temperature measured by the weather station located near the tower. The results, shown in Figure 10, clearly demonstrate frequency-temperature correlations, where the frequencies are seen to increase with increasing temperature because of micro-cracks closing in the masonry, driven by thermal expansion. It should be noticed that the correlation coefficients between frequencies and temperature could be higher than those observed in Figure 10 if the temperature of the masonry, instead of the air temperature, were considered. Figure 11 shows the root mean square values of the accelerations recorded by the first sensor. The small levels of vibration are especially noteworthy, a consequence of the high stiffness of the structure, which is only slightly excited by wind. Slightly higher responses are observed when the excitation is provided by the swinging bells, as shown in Figure 12.
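A compact numerical sketch of the MLR + PCA + novelty-analysis chain described above is given below; the predictor set, the number of retained principal components and the group averaging size are assumptions for illustration and do not reproduce the exact code running on the monitoring server.

```python
import numpy as np

def t2_control_chart(Y, X, n_train, n_pc=3, r=1):
    """MLR + PCA residuals with a Hotelling-type T^2 control chart.

    Y : (N, n) tracked modal frequencies; X : (N, p) predictors such as
    temperature or RMS response amplitudes.  The MLR model and the PCA
    basis are fitted on the training period only; the 95% upper control
    limit (UCL) is the empirical 95% quantile of T^2 over training.
    """
    Xa = np.hstack([np.ones((len(X), 1)), X])          # intercept term
    beta = np.linalg.lstsq(Xa[:n_train], Y[:n_train], rcond=None)[0]
    E_mlr = Y - Xa @ beta                              # MLR residuals
    E0 = E_mlr - E_mlr[:n_train].mean(axis=0)
    _, _, Vt = np.linalg.svd(E0[:n_train], full_matrices=False)
    P = Vt[:n_pc].T                                    # retained PCs
    E = E0 - E0 @ P @ P.T                              # final residuals
    mu = E[:n_train].mean(axis=0)
    Sinv = np.linalg.pinv(np.cov(E[:n_train], rowvar=False))
    # Mean of the last r residuals (r = 1 reduces to single samples)
    Em = np.array([E[max(0, i - r + 1):i + 1].mean(axis=0)
                   for i in range(len(E))])
    T2 = r * np.einsum('ij,jk,ik->i', Em - mu, Sinv, Em - mu)
    ucl = np.quantile(T2[:n_train], 0.95)
    return T2, ucl   # flag observations with T2 > ucl as outliers
```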
Numerical model and damage sensitivity With the purpose of investigating the seismic vulnerability of the bell-tower, as well as the sensitivity of its frequencies to damage, a numerical finite element model of the structure has been constructed. The model, whose graphical sketch is depicted in Figure 13, is a linear elastic representation of the structure in its current condition, using three-dimensional finite elements with orthotropic constitutive behaviour. After some modal sensitivity analyses, not reported here for the sake of brevity, the numerical model was manually tuned to match the experimentally identified modal properties. The main elastic properties of the materials constituting the tower, as estimated after such tuning, are summarized in Table 2. In general, the obtained results are very similar to the values suggested in codes and in the literature for regular stone masonry (shaft) and for mixed stone-brick masonry in the remaining parts of the tower. Table 3 summarizes the identified and computed natural frequencies after manual tuning. These results highlight a good agreement between experimental results and numerical predictions, allowing the numerical model to be used with some confidence. Mode shapes computed from the FE model are shown in Figure 14. The tuned finite element model has been used to investigate the sensitivity of the natural frequencies to damage. In particular, a damage pattern similar to the one observed in field surveys following the strong seismic event of 1997 has been considered. It consists of damage at the base and the top of the columns of the belfry due to flexural failure of the columns themselves. This type of damage has been equivalently modelled as a localized reduction in stiffness. To this aim, a damage parameter has been introduced, representing the reduction in Young's modulus and shear modulus imposed in the critical regions within the columns. Values of the damage parameter were varied from one column to another. In particular, one column is the most damaged (maximum value of the damage parameter), while the remaining columns are progressively less damaged with increasing distance from the most damaged column. The column opposite the most damaged one is considered to be in the healthy state. The results of the damage sensitivity analysis of the eigenfrequencies are shown in Figure 15. These results show that the most damage-sensitive frequency is that of the third, torsional, mode. Progressively less sensitive are the frequencies of the remaining modes. Two damage scenarios are considered in particular. The first, called D1, represents the damage condition producing a relative variation in the most sensitive frequency equal to 0.5%, while the second, called D2, corresponds to a 1.0% reduction in the frequency of the most sensitive mode. As shown in Figure 15, these damage conditions correspond to values of the damage parameter equal to 0.08 and 0.18, respectively, corresponding to 8% and 18% localized reductions in stiffness in the critical regions of the most severely damaged column. Rapid post-earthquake assessment The damage detection methodology presented in Section 3.2 has been applied to the monitoring data in order to test its ability to automatically reveal the presence of damage in the structure. Damage patterns D1 and D2 are considered for this purpose (see Section 4) as typical damage conditions occurring after a relatively small earthquake.
Frequency shifts equal to those computed in Figure 15 have been artificially imposed on the time histories of the identified eigenfrequencies (Figure 9), starting from the 87th day of monitoring. The frequencies identified in the first 80 days have been used for building the statistical models (MLR and PCA), as described in Section 3.3. The MLR model was built by considering the RMS amplitudes of the three measurement channels and the damping ratios of the first two modes as predictors, while three PCs were retained in the PCA. Figure 16 presents the control charts, in terms of the T² statistic, obtained for the two considered damage patterns. As shown in these plots, a significant increase in the number of outliers after the occurrence of the damage is observed in both the D1 and D2 cases, and the percentage of outliers increases, as expected, with increasing damage severity. It is concluded, therefore, that the proposed SHM system enables rapid assessment of post-earthquake damage of the monitored bell-tower. It should be noticed that these results were obtained with a training period of only 80 days, which does not allow a full characterization of the fluctuations in the natural frequencies of the bell-tower associated with seasonal changes in environmental conditions. It is expected, therefore, that increasing the number of available data sets will reduce the minimum level of damage detectable by the proposed SHM system. Conclusions This paper has presented the development and implementation of a continuous vibration-based SHM system for rapid post-earthquake assessment of the monumental masonry bell-tower of San Pietro in Perugia, Italy. The monitoring system comprises three acceleration sensors whose records are remotely processed through an automated output-only modal identification procedure and a statistical process control tool that removes environmental effects and detects anomalies in the identified eigenfrequencies, enabling prompt damage detection. The analysis of the first four months of operation of the system has clearly highlighted daily fluctuations of the natural frequencies due to changes in temperature and, although with a still limited amount of data, has clearly demonstrated the ability of the proposed technique to allow damage detection. Figure 2. Solid CAD model of the bell-tower. Figure 4. Identified mode shapes of the bell-tower. Figure 6. In-field data acquisition system. Figure 8. Remote access to monitoring data. Figure 9. Time histories of the first seven modal frequencies identified from the permanent monitoring of the San Pietro bell-tower (a) and detailed view highlighting daily fluctuations of the frequencies of the first two modes (b). Figure 10. Correlation coefficients between identified frequencies and air temperature (top) and detailed view of the time histories of air temperature and first modal frequency (bottom). Figure 16. Control charts for automated damage detection (Out denotes the outliers' percentage after introducing the damage). Table 1. Identified modal frequencies and damping ratios of the bell-tower. Table 2. Elastic parameters of the materials assumed in the numerical model. Table 3. Comparison between identified and numerically predicted eigenfrequencies.
Efficient estimation of Pauli observables by derandomization We consider the problem of jointly estimating expectation values of many Pauli observables, a crucial subroutine in variational quantum algorithms. Starting with randomized measurements, we propose an efficient derandomization procedure that iteratively replaces random single-qubit measurements with fixed Pauli measurements; the resulting deterministic measurement procedure is guaranteed to perform at least as well as the randomized one. In particular, for estimating any L low-weight Pauli observables, a deterministic measurement on only of order log(L) copies of a quantum state suffices. In some cases, for example when some of the Pauli observables have a high weight, the derandomized procedure is substantially better than the randomized one. Specifically, numerical experiments highlight the advantages of our derandomized protocol over various previous methods for estimating the ground-state energies of small molecules. I. INTRODUCTION Noisy Intermediate-Scale Quantum (NISQ) devices are becoming available [39]. Though less powerful than fully error-corrected quantum computers, NISQ devices used as coprocessors might have advantages over classical computers for solving some problems of practical interest. For example, variational algorithms using NISQ hardware have potential applications to chemistry, materials science, and optimization [3, 7, 18-20, 27, 36, 38, 40]. In a typical NISQ variational algorithm, we need to estimate expectation values for a specified set of operators {O_1, O_2, ..., O_L} in a quantum state ρ that can be prepared repeatedly using a programmable quantum system. To obtain accurate estimates, each operator must be measured many times, and finding a reasonably efficient procedure for extracting the desired information is not easy in general. In this paper, we consider the special case where each O_j is a Pauli operator; this case is of particular interest for near-term applications. Suppose we have quantum hardware that produces multiple copies of the n-qubit state ρ. Furthermore, for every copy, we can measure all the qubits independently, choosing at our discretion to measure each qubit in the X, Y, or Z basis. We are given a list of L n-qubit Pauli operators (each one a tensor product of n Pauli matrices), and our task is to estimate the expectation values of all L operators in the state ρ, with an error no larger than ε for each operator. We would like to perform this task using as few copies of ρ as possible.
If all L Pauli operators have relatively low weight (act nontrivially on only a few qubits), there is a simple randomized protocol that achieves our goal quite efficiently: for each of M copies of ρ, and for each of the n qubits, we choose uniformly at random to measure X, Y, or Z. Then we can achieve the desired prediction accuracy with high success probability if M = O(3^w log(L)/ε²), assuming that all L operators on our list have weight no larger than w [15,21]. If the list contains high-weight operators, however, this randomized method is not likely to succeed unless M is very large. In this paper, we describe a deterministic protocol for estimating Pauli-operator expectation values that always performs at least as well as the randomized protocol, and performs much better in some cases. This deterministic protocol is constructed by derandomizing the randomized protocol. The key observation is that we can compute a lower bound on the probability that randomized measurements on M copies successfully achieve the desired error ε for every one of our L target Pauli operators. Furthermore, we can compute this lower bound even when the measurement protocol is partially deterministic and partially randomized; that is, when some of the measured single-qubit Pauli operators are fixed, and others are still sampled uniformly from {X, Y, Z}. Hence, starting with the fully randomized protocol, we can proceed step by step to replace each randomized single-qubit measurement by a deterministic one, taking care in each step to ensure that the new partially randomized protocol, with one additional fixed measurement, has success probability at least as high as the preceding protocol. When all measurements have been fixed, we have a fully deterministic protocol. In numerical experiments, we find that this deterministic protocol substantially outperforms randomized protocols [13,16,21,34,37]. The improvement is especially significant when the list of target observables includes operators with relatively high weight. Further performance gains are possible by executing (at least) linear-depth circuits before measurements [11,24,25,47]. Such procedures do, however, require deep quantum circuits. In contrast, our protocol only requires single-qubit Pauli measurements, which are more amenable to execution on near-term devices. We provide some statistical background in Sec. II, explain the randomized measurement protocol in Sec. III, and analyze the derandomization procedure in Sec. IV. Numerical results in Sec. V show that our derandomized protocol improves on previous methods. Sec. VI contains concluding remarks. Further examples and details of proofs are in the appendices. II. STATISTICAL BACKGROUND Let ρ be a fixed, but unknown, quantum state on n qubits. We want to accurately predict L expectation values ω_ℓ(ρ) = tr(O_{o_ℓ} ρ), where each observable O_{o_ℓ} is labeled by a Pauli string o_ℓ ∈ {I, X, Y, Z}^n (e.g., o = [X, X] on two qubits). We can approximate each ω_ℓ(ρ) by empirically averaging (appropriately marginalized) measurement outcomes that belong to Pauli measurements that hit o_ℓ: ω̂_ℓ = h(o_ℓ; P)⁻¹ Σ_{m: o_ℓ ⊲ p_m} Π_{k: (o_ℓ)_k ≠ I} [q_m]_k, (2) where o_ℓ ⊲ p_m denotes that the m-th measurement hits o_ℓ, [q_m]_k ∈ {±1} is the outcome on qubit k, and h(o_ℓ; P) counts the hitting measurements. It is easy to check that each ω̂_ℓ exactly reproduces ω_ℓ(ρ) in expectation (provided that h(o_ℓ; P) ≥ 1). Moreover, the probability of a large deviation improves exponentially with the number of hits. See Appendix B 1 for a detailed derivation. We call the function defined in Eq. (3) the confidence bound. It is a statistically sound summary parameter that checks whether a set of Pauli measurements (P) allows for confidently predicting a collection of Pauli observables (O) up to accuracy ε each.
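A minimal sketch of the hit condition and of the empirical-average estimator of Eq. (2) may help fix ideas; the measurement strings and outcomes below are fabricated purely for illustration.

```python
import numpy as np

def hits(obs, pauli):
    """True if measurement string pauli hits obs: the single-qubit
    settings agree with obs on its entire support."""
    return all(o == 'I' or o == p for o, p in zip(obs, pauli))

def estimate(obs, measurements, outcomes):
    """Empirical-average estimator of tr(O_obs rho), in the spirit of Eq. (2).

    measurements : list of length-n strings over {X, Y, Z}
    outcomes     : list of length-n arrays of +/-1 outcomes
    Each hitting measurement is marginalized to the support of obs by
    multiplying the outcomes on the qubits where obs is non-identity.
    """
    support = [k for k, o in enumerate(obs) if o != 'I']
    vals = [np.prod([q[k] for k in support])
            for p, q in zip(measurements, outcomes) if hits(obs, p)]
    if not vals:
        raise ValueError("observable was never hit (h = 0)")
    return float(np.mean(vals))

# Fabricated toy data: two copies of a 3-qubit state
meas = ["XZY", "XZX"]
outs = [np.array([+1, -1, +1]), np.array([-1, -1, +1])]
print(estimate("XZI", meas, outs))   # mean of (+1)(-1) and (-1)(-1) = 0.0
```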
In particular, order log(L) randomized Pauli measurements suffice for estimating any collection of L low-weight Pauli observables. It is instructive to compare this result to other powerful statements about randomized measurements, most notably the "classical shadow" paradigm [21,37]. For Pauli observables and Pauli measurements, the two approaches are closely related. The estimators (2) are actually simplified variants of the classical shadow protocol (in particular, they do not require median-of-means prediction), and the requirements on M are also comparable. This is no coincidence; information-theoretic lower bounds from [21] assert that there are scenarios where the scaling M ∝ log(L) max_ℓ 3^{w(o_ℓ)}/ε² is asymptotically optimal and cannot be avoided. Nevertheless, this does not mean that randomized measurements are always a good idea. High-weight observables pose an immediate challenge, because it is extremely unlikely to hit them by chance alone. IV. DERANDOMIZED PAULI MEASUREMENTS The main result of this work is a procedure for identifying "good" Pauli measurements that allow for accurately predicting many (fixed) Pauli expectation values. This procedure is designed to interpolate between two extremes: (i) completely randomized measurements (good for predicting many local observables) and (ii) completely deterministic measurements that directly measure observables sequentially (good for predicting few global observables). Note that we can efficiently compute concrete confidence bounds (3), as well as expected confidence bounds averaged over all possible Pauli measurements (5). Combined, these two formulas also allow us to efficiently compute expected confidence bounds for a list of measurements that is partially deterministic and partially randomized. Suppose that P subsumes deterministic assignments for the first (m − 1) Pauli measurements, as well as concrete choices for the first k Pauli labels of the m-th measurement; see Fig. 1 (center). Then the expected confidence bound, conditioned on the labels assigned so far, can still be evaluated in closed form; this is Eq. (6). This formula allows us to build deterministic measurements one Pauli label at a time. We start by envisioning a collection of M completely random n-qubit Pauli measurements; that is, each Pauli label is random and the expected confidence bound is given by Eq. (5). Crucially, Eq. (6) allows us to efficiently identify a minimizing assignment: doing so replaces an initially random single-qubit measurement setting by a concrete Pauli label that minimizes the conditional expectation value over all remaining (random) assignments. This procedure is known as derandomization [1,33,43] and can be iterated. Fig. 1 provides visual guidance, while pseudocode can be found in Algorithm 1. There are a total of n × M iterations. Step (k, m) compares the three conditional expectation values E_P[Conf_ε(O; P) | P♯, P[k, m] = W] for W ∈ {X, Y, Z} and assigns the Pauli label that achieves the smallest score. These update rules are constructed to ensure that (appropriate modifications of) Eq. (7) remain valid throughout the procedure. Combining all of them implies the following rigorous statement about the resulting Pauli measurements P♯. Theorem 2 (Derandomization promise). Algorithm 1 is guaranteed to output Pauli measurements P♯ with below-average confidence bound: Conf_ε(O; P♯) ≤ E_P[Conf_ε(O; P)]. We see that derandomization produces deterministic Pauli measurements that perform at least as favorably as (averages of) randomized measurement protocols. But the actual difference between randomized and derandomized Pauli measurements can be much more pronounced.
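The greedy scheme of Algorithm 1 can be prototyped in a few lines. The sketch below uses a product form of the expected confidence bound (up to its constant prefactor) as its cost, with ν = 1 − exp(−ε²/2), and recomputes the cost naively at every step; it is a conceptual illustration under these assumptions, not the optimized implementation accompanying the paper.

```python
import numpy as np
from math import exp

def hit_prob(obs, setting):
    """Pr[a partially assigned measurement hits obs]; unassigned qubits
    (None) are drawn uniformly from {X, Y, Z}, i.e. match w.p. 1/3."""
    p = 1.0
    for o, s in zip(obs, setting):
        if o == 'I':
            continue                     # identity: any setting works
        if s is None:
            p /= 3.0
        elif s != o:
            return 0.0                   # a fixed label already conflicts
    return p

def expected_conf(observables, P, nu):
    """Expected confidence bound (up to its constant prefactor):
    sum over observables of prod_m (1 - nu * Pr[measurement m hits])."""
    return sum(np.prod([1.0 - nu * hit_prob(o, p) for p in P])
               for o in observables)

def derandomize(observables, n, M, eps=0.9):
    """Greedy derandomization in the spirit of Algorithm 1."""
    nu = 1.0 - exp(-eps**2 / 2.0)
    P = [[None] * n for _ in range(M)]
    for m in range(M):
        for k in range(n):
            scores = {}
            for W in 'XYZ':
                P[m][k] = W
                scores[W] = expected_conf(observables, P, nu)
            P[m][k] = min(scores, key=scores.get)   # keep the best label
    return [''.join(p) for p in P]

# The two-global-string example discussed later in the text:
print(derandomize(["YYY", "ZZZ"], n=3, M=4))   # alternates YYY / ZZZ
```

On the two-global-string example it reproduces the equal-allocation protocol discussed later in the text, while for collections of low-weight observables it falls back to shadow-like behaviour.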
In the examples we considered, derandomization reduces the measurement budget M by at least an order of magnitude compared to randomized measurements. We note, however, that because Algorithm 1 implements a greedy update procedure, we have no assurance that our derandomized measurement procedure is globally optimal, or even close to optimal. V. NUMERICAL EXPERIMENTS The ability to accurately estimate many Pauli observables is an essential subroutine for variational quantum eigensolvers (VQE) [18,28,36,38,40]. Randomized Pauli measurements [15,21], also known as classical shadows in this context, offer a conceptually simple solution that is efficient both in terms of quantum hardware and measurement budget. Derandomization can and should be viewed as a refinement of the original classical shadows idea. Supported by rigorous theory (Theorem 2), this refinement is only contingent on an efficient classical preprocessing step, namely running Algorithm 1. It does not incur any extra cost in terms of quantum hardware and classical post-processing, but can lead to substantial performance gains. Numerical experiments visualized in Ref. [21, Figure 5] have revealed unconditional improvements of about one order of magnitude for a particular VQE experiment [30] (simulating quantum field theories). In this section, we present additional numerical studies that support this favorable picture. These address a slight variation of Algorithm 1 that does not require fixing the total measurement budget M in advance. We focus on the electronic structure problem: determining the ground-state energy of molecules with unknown electronic structure. This is one of the most promising VQE applications in quantum chemistry and materials science. Different encoding schemes, most notably Jordan-Wigner (JW) [26], Bravyi-Kitaev (BK) [5] and Parity (P) [5,42], allow for mapping molecular Hamiltonians to qubit Hamiltonians that correspond to sums of Pauli observables. Several benchmark molecules have been identified whose encoded Hamiltonians are just simple enough for an explicit classical minimization, so that we can compare Pauli estimation techniques with the exact answer. Fig. 2 illustrates one such comparison. We fix a benchmark molecule, BeH2, and a Bravyi-Kitaev (BK) encoding, and plot the ground-state energy approximation error against the number of Pauli measurements. The plot highlights that derandomization outperforms the original classical shadows procedure (randomized Pauli measurements) [21], locally-biased classical shadows [17], and another popular technique known as largest degree first (LDF) grouping [16,44]. The discrepancy between randomized and derandomized Pauli measurements is particularly pronounced. This favorable picture extends to a variety of other benchmark molecules and other encoding schemes; see Table 3. For a fixed measurement budget, derandomization consistently leads to a smaller estimation error than other state-of-the-art techniques. VI. CONCLUSION AND OUTLOOK We consider the problem of predicting many Pauli expectation values from few Pauli measurements. Derandomization [1,33,43] provides an efficient procedure that replaces originally randomized single-qubit Pauli measurements by specific Pauli assignments. The resulting Pauli measurements are deterministic, but inherit all advantages of a fully randomized measurement protocol. Furthermore, the derandomization procedure can accurately capture the fine-grained structure of the observables in question.
Predicting molecular ground-state energies based on derandomized Pauli measurements scales favorably and improves upon many existing techniques [15,16,37,44]. Source code for an implementation of the proposed procedure is available at [23]. Randomized measurements have also been used to estimate entanglement entropy [6,21,41,46], topological invariants [9,14], benchmark physical devices [8,12,21,29], and predict outcomes of physical experiments [22]. Derandomization provides a principled approach for adapting randomized measurement procedures to fine-grained structure, and is closely related to an algorithmic technique, multiplicative weight update [2], commonly used in machine learning and game theory. So far, we have only considered estimation of Pauli observables, but measurement design via derandomization should apply more broadly. We look forward to extensions of derandomization to other tasks, such as estimating non-Pauli observables and entanglement entropies, as well as to improvements of the cost function f(W) in Algorithm 1. Figure 2: Estimation error for the BeH2 ground-state energy (BK encoding [5]) for different measurement schemes. The error for the derandomized shadow is the root-mean-squared error (RMSE) over ten independent runs; the error for the other methods is the RMSE over infinitely many runs and can be evaluated efficiently using the variance of one experiment [16]. Table 3: Average estimation error using 1000 measurements for different molecules, encodings, and measurement schemes. The first column shows the molecule and the corresponding ground-state electronic energy (in Hartree). Abbreviations: derandomized classical shadow (Derand.), locally-biased classical shadow (Local S.), largest degree first (LDF) heuristic, and original classical shadow (Shadow) [21]. Many near-term applications of quantum devices rely on repeatedly estimating a large number of low-weight Pauli observables. For example, low-energy eigenstates of a many-body Hamiltonian may be prepared and studied using a variational method, in which the Hamiltonian, a sum of local terms, is measured many times. Using randomized measurements, we can predict many low-weight observables simultaneously at comparatively little cost. It is known that a logarithmic number of randomized Pauli measurements allows for accurately predicting a polynomial number of low-weight observables [21]. This desirable feature provably extends to derandomized measurements. From Theorem 2 and Eq. (5), we infer that the measurement budget M = 4 log(2L/δ) max_ℓ 3^{w(o_ℓ)}/ε² suffices to ensure that Algorithm 1 outputs Pauli measurements P♯ that obey Conf_ε(O; P♯) ≤ δ/2. With Lemma 1, we may convert this into an error bound: empirical averages (2) formed from appropriate measurement outcomes are guaranteed to obey |ω̂_ℓ − tr(O_{o_ℓ} ρ)| ≤ ε for all 1 ≤ ℓ ≤ L with high probability (at least 1 − δ). This error bound is roughly on par with the best rigorous result about predicting local Pauli observables from randomized Pauli measurements [15]. But this argument implicitly assumes that Conf_ε(O; P♯) (which we can compute) is comparable to E_P[Conf_ε(O; P)] (which is characterized by Eq. (5)). This assumption is extremely pessimistic, because often Conf_ε(O; P♯) is much smaller than E_P[Conf_ε(O; P)]. If this is the case, derandomized Pauli measurements perform substantially better. Few global Pauli observables. We have seen that derandomized measurements never perform worse than randomized measurements. But they can perform much better.
This discrepancy is best illustrated with a simple example: design Pauli measurements to predict both a complete Y-string (o_1 = [Y, ..., Y]) and a complete Z-string (o_2 = [Z, ..., Z]). Here, randomized measurements are a terrible idea, because it is exponentially unlikely to hit either string by chance alone. Contrast this with derandomization. For the very first assignment (k = 1, m = 1), Algorithm 1 starts by computing three conditional expectations. Comparing them reveals f(Y) = f(Z) < f(X), and the algorithm determines that assigning X is likely a bad idea. The two remaining choices are equivalent, and the algorithm assigns, say, P♯[1, 1] = Y. This initial choice does affect the expected confidence bound associated with the second Pauli label (k = 2, m = 1): f(Y) < f(X) = f(Z). Taking into account the already assigned first Pauli label, both X and Z become equally unfavorable, and the algorithm sticks to assigning P♯[2, 1] = Y. This situation now repeats itself until the first Pauli measurement is completely assigned: the algorithm has successfully kept track of an entire global Pauli string. It is now time to assign the first Pauli label of the second Pauli measurement (k = 1, m = 2). While X is still a bad idea, taking into account that we have already measured o_1 once also breaks the symmetry between Y and Z assignments: now f(Z) < f(Y), and the algorithm switches to assigning Z labels throughout the second measurement. In words: measure both global observables equally often. Although statistically optimal, this measurement protocol is neither surprising nor particularly interesting. What is encouraging, though, is that Algorithm 1 has (re-)discovered it all by itself. Very many global Pauli observables (non-example): The derandomization algorithm is not without flaws. The greedy update rule in line 8 of Algorithm 1 can be misguided into producing non-optimal results. This happens, for instance, for a very large collection of global Pauli observables that appears to have favorable structure but actually does not. For instance, set o_1 = [X, ..., X] and o_ℓ = [Z; õ_ℓ], where õ_ℓ ∈ {X, Y, Z}^{n−1} ranges over all 3^{n−1} possible Pauli strings of size (n − 1). There are L = 3^{n−1} + 1 target observables, all of which are global and therefore incompatible. However, 3^{n−1} of them start with a Pauli-Z label. This imbalance leads the algorithm to believe that assigning P♯[1, m] = Z for all 1 ≤ m ≤ M is always a good idea (provided that M is not much larger than 3^{n−1}). By doing so, it completely ignores the first target observable, which starts with an X label. But at the same time, it cannot capitalize on this particular decision, because observables o_2 to o_L are actually incompatible. This results in an imbalanced output P♯ that treats observables o_2 to o_L roughly equally, but completely forgets about o_1. Needless to say, the resulting confidence bound will not be minimal either. We emphasize that this highly stylized non-example is not motivated by actual applications. Instead, it is intended to illustrate how greedy update procedures can get stuck in local minima. Appendix B: Details of proofs. Now, suppose that o ∈ {I, X, Y, Z}^n is a Pauli string that is hit by the Pauli measurement p (o ⊲ p). Then we can appropriately marginalize the n-qubit outcome strings q ∈ {±1}^n to reproduce ω(ρ) = tr(O_o ρ) in expectation: E[Π_{k: o_k ≠ I} q_k] = tr(O_o ρ). Lemma 1 in the main text is an immediate consequence of the associated concentration inequality. Proof. The union bound (also known as Boole's inequality) states that the probability associated with a union of events is upper bounded by the sum of the individual event probabilities.
For the task at hand, it implies Pr[|ω̂_ℓ − ω_ℓ(ρ)| ≥ ε for some ℓ] ≤ Σ_{ℓ=1}^{L} Pr[|ω̂_ℓ − ω_ℓ(ρ)| ≥ ε]. This allows us to treat the individual deviation probabilities separately. Fix 1 ≤ ℓ ≤ L and note that ω̂_ℓ is an empirical average of M_ℓ = h(o_ℓ; P) random signs s that are independent of each other (they arise from different measurement outcomes). Empirical averages of independent signed random variables tend to concentrate sharply around their true expectation value E[s]. Hoeffding's inequality makes this intuition precise and asserts, for any ε > 0, Pr[|ω̂_ℓ − ω_ℓ(ρ)| ≥ ε] ≤ 2 exp(−M_ℓ ε²/2). The claim follows, because such an exponential bound is valid for each term in Eq. (B5); this also includes terms with zero hits (M_ℓ = 0), because then Pr[|ω̂_ℓ − ω_ℓ(ρ)| ≥ ε] ≤ 1 = exp(−0/2). Derivation of Eq. (6): Note that each hitting count h(o_ℓ; P) = Σ_{m=1}^{M} 1{o_ℓ ⊲ p_m} is a sum of M indicator functions that can each take binary values. This structure allows us to rewrite the confidence bound (3) as a product over measurements, Conf_ε(O; P) ∝ Σ_{ℓ=1}^{L} Π_{m=1}^{M} (1 − ν 1{o_ℓ ⊲ p_m}), where ν = 1 − exp(−ε²/2) ∈ (0, 1). Next, note that each remaining indicator function can be further decomposed into a product of more elementary indicator functions, 1{o_ℓ ⊲ p_m} = Π_{k: (o_ℓ)_k ≠ I} 1{(p_m)_k = (o_ℓ)_k}, and taking the expectation over the not-yet-assigned single-qubit labels factorizes qubit by qubit; this yields Eq. (6). Appendix C: Details regarding numerical experiments. We consider a molecular electronic Hamiltonian that has been encoded into an n-qubit system. The Hamiltonian can be written as a sum of Pauli observables, H = Σ_P α_P P, (C1) where the sum runs over Pauli strings P with real coefficients α_P. Each molecule is represented by a fermionic Hamiltonian in a minimal STO-3G basis, ranging from 4 to 16 spin orbitals. The 8-qubit H2 example is represented using a 6-31G basis. The fermionic Hamiltonian is mapped to a qubit Hamiltonian using three different common encodings: Jordan-Wigner (JW) [26], Bravyi-Kitaev (BK) [5] and Parity (P) [5,42]. The Pauli decomposition considered here has already been featured in many existing works; see [4,17,27] for more details. In our numerical experiments, the measurement procedure is applied to the exact ground state of the encoded n-qubit Hamiltonian H. The ground state |g⟩ is obtained by exact diagonalization using the Lanczos method; see e.g. [31] for a recent survey. We focus on the root-mean-squared error (RMSE) to quantify the measurement error. For M independent repetitions of the measurement procedure giving rise to M estimates Ê_1, ..., Ê_M, the RMSE is given by RMSE = (M⁻¹ Σ_{i=1}^{M} (Ê_i − E_GS)²)^{1/2}, where E_GS is the exact ground-state electronic energy tr(Hρ) = ⟨ψ|H|ψ⟩. We consider the ground-state electronic energy of the molecule without the static Coulomb repulsion energy between the nuclei. Hence, the total ground-state energy of the molecule is the sum of the ground-state electronic energy and the static Coulomb repulsion energy (Born-Oppenheimer approximation). We do not focus on the static Coulomb repulsion energy because it is not encoded in the molecular electronic Hamiltonian H and is considered to be a fixed value. We now elaborate on the alternative measurement procedures with which we compared our derandomized procedure. 1. LDF grouping: The largest-degree-first (LDF) grouping strategy and other heuristics have been considered and investigated in [45]. The conclusion is that the LDF grouping strategy results in good performance (differing from the best heuristics by at most 10%) and is generally recommended. The measurement error (RMSE) of the LDF grouping strategy can be computed exactly given an exact representation of the ground state |g⟩; see [17] for details. 2. Classical shadow: The measurement procedure measures each qubit in a random X, Y, Z Pauli basis. This procedure is known to allow estimation of any L few-body observables from only order log(L) measurements [10,15,21].
However, the performance degrades significantly when many-body observables are considered. Hence, this approach is likely to perform less well for molecular Hamiltonians, due to the presence of many high-weight Pauli observables.

3. Locally-biased classical shadows: This is an improvement over classical shadows, proposed in [17], designed to overcome their disadvantages in estimating the expectation values of many-body observables. The idea is to bias the distribution over the different Pauli bases (X, Y, or Z) for each qubit so as to minimize the variance when measuring the quantum Hamiltonian given in Equation (C1). Ref. [17] demonstrated that this approach yields similar or better performance compared to LDF grouping and outperforms classical shadows.

In what follows, we provide a detailed description of the cost function used to derandomize the single-qubit Pauli measurements in our numerical experiments. In Algorithm 1, we used a cost function whose conditional expectation is given by Eq. (6), restated here for convenience, where η, ν > 0 are hyperparameters that need to be chosen properly. In the numerical experiments, we set η = 0.9 and ν = 1 − exp(−η/2). The larger V(o_ℓ, P) is, the smaller the single-observable cost function exp(−V(o_ℓ, P)) will be. The following discussion provides an intuitive understanding of the roles of the two terms in V(o_ℓ, P). Once the entire set of M measurements has been decided, V(o_ℓ, P) consists only of the first term and is proportional to the number of times the observable o_ℓ has been measured. For quantum chemistry applications, the coefficients of the different Pauli observables differ; e.g., in Eq. (C1), the Hamiltonian H consists of Pauli observables P with varying coefficients α_P. In such a case, one would want to measure each Pauli observable o_ℓ a number of times proportional to |α_{o_ℓ}| [32]. To include this proportionality, we consider a modified cost function that depends on the coefficients α. The definition of V(o_ℓ, P) is given in Eq. (C8). Recall that V(o_ℓ, P) is proportional to the number of times the observable o_ℓ has been measured; hence, the weight factor w_{o_ℓ} promotes proportionality of V(o_ℓ, P) to w_{o_ℓ} ∝ |α_{o_ℓ}|. While this cost function is derived by derandomizing the powerful randomized procedure of [21], it is not clear whether it is the optimal cost function. We believe other cost functions, tailored to the particular application, could yield even better performance; we leave such an exploration as a goal for future work.
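As a rough numerical companion, the sketch below evaluates a coefficient-weighted derandomization cost of the kind just described. It is our simplification: only the first (hitting-count) term of V(o_ℓ, P) is kept, approximated as (η/2)·h(o_ℓ; P), and the weight normalization is our choice, so this is a proxy for the weighted cost and Eq. (C8) rather than a faithful transcription.

```matlab
% Proxy for the weighted derandomization cost: sum_l w_l * exp(-V_l), with
% V_l approximated by (eta/2) * h(o_l; P) (first term only) and weights
% w_l proportional to |alpha_l|. Observables and measurements are stored as
% character arrays over 'I','X','Y','Z'; all names here are ours.
function c = weighted_cost(obs, P, alpha, eta)
    w = abs(alpha) / sum(abs(alpha));      % w_l ~ |alpha_l| (our normalization)
    c = 0;
    for l = 1:size(obs, 1)
        h = 0;
        for m = 1:size(P, 1)               % hitting count h(o_l; P)
            h = h + all(obs(l,:) == 'I' | obs(l,:) == P(m,:));
        end
        c = c + w(l) * exp(-eta/2 * h);    % each hit shrinks the l-th term
    end
end
```

A greedy step in the spirit of Algorithm 1 would then try each candidate label 'X', 'Y', 'Z' for the next open slot of P and keep the one that minimizes this cost, which is exactly the behavior traced in the Y/Z-string example above.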
Four-Layer Surface Plasmon Resonance Structures with Amorphous As2S3 Chalcogenide Films: A Review

The paper is a review of surface plasmon resonance (SPR) structures containing amorphous chalcogenide (ChG) films as plasmonic waveguides. The calculation method and the specific characteristics obtained for four-layer SPR structures containing films made of amorphous As2S3 and As2Se3 are presented. The paper is mainly based on our previously obtained and published scattered results, to which a generalized point of view has been applied. In our analysis, we demonstrate that, through a proper choice of the SPR structure layer parameters, we can control the resonance angle, the sharpness of the SPR resonance curve, the penetration depth, and the sensitivity to changes in the refractive index of the analyte. These results are obtained by operating with the thickness of the ChG film and the parameters of the coupling prism. Aspects regarding the realization of the coupling prism are discussed. Two distinct cases are analyzed: first, when the prism is made of a material with a refractive index higher than that of the waveguide material; second, when the prism is made of a material with a lower refractive index. We demonstrated experimentally that the change in reflectance self-induced by the modification of the As2S3 refractive index exhibits a hysteresis loop. We present specific results regarding the identification of alcohols and hydrocarbons and the detection of a marker of E. coli bacteria.

Introduction

Plasmonics is a research field that explores the confinement of the electromagnetic field over dimensions of the order of the wavelength. The basic phenomenon responsible for the spatial subwavelength confinement of the electromagnetic field is the interaction between the electromagnetic radiation and the conduction electrons of metallic interfaces or metallic nanostructures, leading to an enhanced optical near field. The dimensions of photonic devices are restricted by the diffraction limit, which is of the order of half the wavelength for conventional optical devices. Accordingly, the diffraction limit is of the order of 250 nm in the visible domain, which is approximately an order of magnitude larger than the dimensions of electronic components (~10 nm). To overcome the diffraction limit, metal-insulator structures were developed, which may confine the light near the interface at dimensions shorter than the optical wavelength due to surface plasmons (waves of the electric charge density at the metal-insulator interface) [1]. However, the light attenuation in plasmonic structures is large, owing to the forced oscillations of the free electrons under the electromagnetic field, so the propagation distance is limited to the order of mm.

The plasmon propagation can be used for signal routing along conductive nanowires. Since the surface plasmons do not involve electric charge displacement, the effects of inductance and capacitance (which reduce the performance of integrated circuits) do not occur. There are many specialized monographs in the literature [2-6] on problems linked to plasmonics. The proposal of Kretschmann to use prism coupling [7] very soon led to the development of plasmonic sensors [8-10], with particularly impressive results for biological sensors [11-13], which are selective to analytes. Other interesting types of sensors were described in [14,15].
Fundamentals of the plasmons in multilayer structures can be found in Maier's book [16]. Davis [17] proposed the matrix method for the calculation of the light interaction with multilayer structures made of metals and insulators. Other authors [18-20] used the matrix method to determine the resonance characteristics. Economou [21] and Burke et al. [22] derived the dispersion relation. This was analyzed for different multilayer configurations, but solutions were obtained only for structures of special symmetry. Opolski [23] performed numerical simulations of the plasmon resonances in planar structures. However, the matrix method does not enable the calculation of the electromagnetic field distribution in the structure layers.

Chalcogenide (ChG) materials have high transparency in the IR region as well as a high refractive index [24]. A surface plasmon resonance (SPR)-based biosensor using the amorphous ChG material Ge20Ga5Sb10S65 (called 2S2G) as a coupling prism [25] achieved a sensing limit for refractive index variation of 5 × 10⁻⁵.

Chalcogenide materials offer the possibility of improving the characteristics of conventional SPR sensors. Thus, in [26], a chalcogenide glass-based sensor applicable in the IR region was studied. The use of an Al film for the chalcogenide glass sensor increased the intrinsic sensitivity by almost 400% as compared with an Au film-based sensor, gold being the material most commonly used in the visible region. The design of IR sensors is a timely task due to the existence of gas absorption lines in the IR spectral domain.

In [25], numerical simulations were carried out to investigate the potential of sulfide glass systems as coupling prism materials. An SPR biosensor was set up using angular interrogation. The calculations performed showed that the detection limit of the sensor was 3 × 10⁻⁵ RIU. The development of SPR sensors with a chalcogenide thin-film layer was proposed in [27] for bio-applications. The film, accompanied by a graphene layer, demonstrated selective adsorption. Recent studies established plasmon-enhanced photo-stimulated diffusion of silver into GeSe2-based chalcogenide thin films [28]. Another study [29] demonstrated a significant increase in the characteristics of diffraction gratings formed on an inorganic ChG photoresist due to surface plasmons.

Amorphous ChG manifests new characteristics when configured as a metamaterial. In [30], the authors experimentally demonstrated that a nanostructured chalcogenide glass can efficiently generate third-harmonic radiation, leading to a strong UV light source at the nanoscale due to phase locking, despite the ordinarily high optical absorption in this region. Later, the same authors [31] demonstrated a two-order-of-magnitude improvement of the third harmonic in stacked three-layer chalcogenide metasurfaces with respect to a single layer.

Theoretical calculations [32] of the permittivity were performed using the Maxwell-Garnett model for composite materials containing metal (Ag, Al) nanoparticles in a dielectric medium. This method paved the way for the realization of new metamaterials, i.e., materials with a negative refractive index. It does not require a periodic distribution of particles, which makes it technologically very attractive. It was shown that the permittivity can be engineered by changing the inclusion coefficient, which was quite small (around 0.05). ChG materials are very promising as host materials because they provide low optical losses.
Another SPR experiment was presented in [33], where the photoinduced modifications in amorphous As2Se3 [34] were detected owing to plasmonic resonance enhancement. Light-to-light modulation was demonstrated in an SPR configuration using ChG materials as the active medium and a rutile coupling prism [35]. As the authors mentioned, the SPR resonance dip was only observed for p-polarized light. We expect that more complex resonance phenomena will occur when thin ChG films with a high refractive index are used as the sensing medium in multilayer SPR configurations.

This review analyzes SPR in multilayer structures, with a focus on four-layer structures that act as planar waveguides. Currently, several SPR-based tools are available on the market. All of them, whether chemical or biological, are based on the three-layer Kretschmann configuration [7]. Four-layer structures containing amorphous ChG materials open up new possibilities in terms of the manipulation of the degree of confinement, the sensitivity to refractive index changes, and the depth of field. Minimization of the optical loss of the structure is achieved by a proper selection of the ChG film material and the metal film thickness, accounting for the working laser wavelength. The increased refractive index of the ChG film is essential to achieving better field confinement near the surface. Films composed of As2S3 or As2Se3 are considered reference materials. They have a high refractive index (2.45-3.0), low optical losses in the transparency band, and can be easily obtained on large metal or dielectric surfaces by vacuum deposition techniques.

The prospects for chalcogenide photonics were traced in the reviews of Eggleton and co-authors [36,37]. These papers summarized progress in photonic devices that exploit the optical properties of chalcogenide glasses and identified the most promising areas, such as mid-infrared sensing, integrated optics, and ultrahigh-bandwidth signal processing. More recently, the authors of [38] have written a review analyzing the concept of new lab-on-chip devices that exploit acousto-optic interactions to create lasers, amplifiers, and other photonic devices; chalcogenide materials are characterized by a record-high acousto-optic coefficient. Just recently, the authors of [39] presented a road map for emerging photonic technology platforms in which chalcogenide semiconductors play a critical role; the first demonstration of on-chip devices using an As2S3 planar waveguide was presented there.

The physical bases of the interaction of light with plasmon polaritonic waves are elucidated briefly in Section 2. Section 3 presents the transfer matrix method used to calculate the resonance curves in multilayer planar structures, particularly the four-layer structure. The results with specific calculations are presented in Section 4. Section 5 presents numerical simulations using the characteristic equation or a complete solver to calculate the field distribution in a four-layer structure, with some discussion of the topic of plasmonic resonance. Nonlinear effects in SPR structures containing a film of the amorphous ChG compounds (As2S3, As2Se3) are presented in Section 6. The photoinduced phenomena known in these materials are amplified by the resonance, which promises applications in active photonic devices. Finally, in Section 7, the main properties of the four-layer structure are presented with respect to sensor applications (Section 7.1 for alcohol identification and Section 7.2 for E. coli detection).
Discussions and conclusions are given in Sections 8 and 9.

Surface Plasmon Resonance (SPR) in a Three-Layer Configuration: Basic Features

The SPR configuration employs the total internal reflection of the light at the prism base. The evanescent field extends through the thin metal film deposited on the base, and it couples with the plasmon polariton wave created at the external interface of the metal film. The resonant coupling of the light occurs when the phase velocity of the light parallel to the surface is equal to the velocity of the plasmon, so that surface plasmon polariton (SPP) waves can propagate along the interface between the conductor and the dielectric.

The Maxwell equations describing the SPP propagation reduce to a system of coupled Helmholtz wave equations, one per layer of the structure, when considering harmonic oscillations. They can be solved for different geometries [16,40,41]. In the case when the planar structure lies in the xy plane, which designates the interface of the layers, the equations correspond to the one-dimensional case. Every Helmholtz equation is still vectorial and, in the general case of an arbitrary polarization of the incident light, is a system of several equations. The number of equations corresponds to the number of layers. The introduction of the concepts of TM and TE modes allows the reduction of the system to a scalar equation. Finally, the following expression for the propagation constant is obtained [16]:

β = k0 √(εm εd / (εm + εd)), (1)

where εm and εd are the permittivities of the metal and the dielectric, respectively. The conditions required for SPP wave coupling at the metal-dielectric interface are as follows [40]: the real part of the metal permittivity must be negative and larger in magnitude than the dielectric permittivity, i.e., ε′m < 0 and |ε′m| > εd.

The electromagnetic fields decrease exponentially with distance from both interfaces. The depth of penetration is of the order of 250-300 nm in dielectrics (approximately half the wavelength). In metals, the field penetration is of the order of 10-15 nm due to the high absorption. So, a coupled state of the plasmon polariton and electromagnetic waves propagates along the metal-dielectric interface.

The optical constants of metals and their dispersion play an important role in ensuring the conditions for the propagation constant to be real. The metal-dielectric interface supports SPP waves only for metals that have a large enough magnitude of the dielectric constant. The Drude plasma model was formulated in order to calculate the dielectric constant. In this model, the macroscopic polarization leads to the following dispersion relation for the permittivity:

ε(ω) = 1 − ωp² / (ω² + iγω). (2)

Here, ωp is the plasma frequency and γ is the collision rate. The relative permittivity is complex, ε(ω) = ε′(ω) + iε″(ω), where ε′ is the real and ε″ the imaginary component. For large light frequencies (i.e., ω ≫ γ, but ω < ωp), the relative permittivity given by Equation (2) becomes real and negative:

ε(ω) ≈ 1 − ωp²/ω². (3)

The relative permittivity is generally calculated from the optical constants n and k as ε(ω) = (n − ik)². Tabulated values of the optical constants are given in Palik's handbooks [42,43]. Examples of optical constants for the usual metals employed in plasmonic experiments were obtained by extrapolation of Palik's data to the wavelengths used (see Table 1). From these data, the complex dielectric constants can be calculated. Aluminum has higher values of the extinction coefficient, which leads to a stronger attenuation of the SPP waves. The low value of the extinction coefficient is the reason for choosing noble metals such as gold or silver for building plasmonic structures.
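As a quick numerical check of Equations (2) and (3), the following MATLAB fragment evaluates the Drude permittivity and converts tabulated optical constants (n, k) into a complex permittivity. The gold values at 633 nm are those quoted later in the text (after Rakic [48]); the ωp and γ values are placeholder numbers of ours, not fitted constants.

```matlab
% Drude permittivity, Eq. (2): eps(w) = 1 - wp^2 / (w^2 + 1i*gamma*w).
wp    = 1.37e16;                 % plasma frequency [rad/s] (placeholder value)
gamma = 1.0e14;                  % collision rate [rad/s] (placeholder value)
w     = 2*pi*3e8 / 633e-9;       % angular frequency of 633 nm light
eps_drude = 1 - wp^2 / (w^2 + 1i*gamma*w);

% Permittivity from tabulated optical constants: eps = (n - 1i*k)^2.
n_au = 0.19; k_au = 3.25;        % gold at 633 nm, after Rakic [48]
eps_au = (n_au - 1i*k_au)^2;     % large negative real part supports SPP waves
```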
The analysis concept presented above leads to the formula for the plasmon propagation constant β. To excite the SPP waves, the phase-matching conditions for energy and momentum must be fulfilled. Since the propagation constant β is greater than the wave vector k of the light in the dielectric, the realization of the phase-matching conditions is not a trivial problem. Kretschmann and Raether [7] demonstrated that the phase-matching conditions can be achieved by using a thin metal film in a three-layer configuration. In this configuration, the top semi-infinite medium is made of a dielectric with a refractive index higher than that of the bottom one, and the SPP wave can be excited at the bottom metal interface via evanescent waves. Figure 1a presents a three-layer structure with BK7 glass with a refractive index of 1.51 (the coupling prism) at the top and, at the bottom of the metallic film, air with a refractive index close to unity or water solutions with a refractive index close to 1.33 (the ambient medium).

The resonance conditions require that the propagation constant β (Equation (1)) be equal to the propagation constant of the light tangential to the surface:

β = k0 np sin θ. (4)

The incidence angle θ is measured from the normal to the interface. From the experimental curve, which corresponds very well to the calculated one, we can see in Figure 1b that the resonance angle is near 45°. Due to refraction restrictions, this resonance angle can be obtained when the light beam is directed normally onto an optical prism with a 90-degree angle. Small adjustments are made by fine rotations of the table, since the resonance dip is very sharp, of the order of tenths of a degree.

Formula (4) for the propagation constant assumes the metal film to be thick. When the film thickness decreases, the SPP modes couple to each other, producing a shift in the resonance angle that was first calculated by Kretschmann [44]. The shift of the resonance curve is still small for a film thickness of ~50 nm; this thickness corresponds to a drop in reflectance to near zero for probe light of 633 nm wavelength.

In the book of Sophocles [45], the solutions for three-layer plasmonic waveguide structures are presented. The analytical expression for the reflectivity represents the Airy formula for the three-layer structure [46]. Abeles's 2 × 2 matrix approach [47] may be employed for the calculation of the reflectivity of a multilayer structure.

Transfer Matrix Method for Calculating the SPR Resonance Curves in Multilayer Configurations

For multilayer configurations, it is not possible to obtain an explicit solution for the reflectivity. The reflectivity can, however, be obtained for the general N-layer case in terms of characteristic transfer matrices [47]. Calculations may be performed for both p and s polarizations. The following notations can be used:

q_pj = √(εj − n1² sin²θ)/εj, q_sj = √(εj − n1² sin²θ), βj = k0 dj √(εj − n1² sin²θ). (6)

In these notations, s and p designate the polarization, while j is the layer's number. For each layer number j, the transfer matrices M_pj and M_sj are calculated:

M_pj = [ cos βj, −i sin βj/q_pj ; −i q_pj sin βj, cos βj ],

and analogously for M_sj with q_sj. Next, the total transfer matrices T_p and T_s are calculated as the ordered products over the inner layers:

T_p = M_p2 M_p3 ... M_p(N−1), T_s = M_s2 M_s3 ... M_s(N−1).

Finally, the reflectances R_p and R_s are calculated:

R_p = |[(T_p(0,1) q_pN + T_p(0,0)) q_p1 − (T_p(1,1) q_pN + T_p(1,0))] / [(T_p(0,1) q_pN + T_p(0,0)) q_p1 + (T_p(1,1) q_pN + T_p(1,0))]|²,
R_s = |[(T_s(0,1) q_sN + T_s(0,0)) q_s1 − (T_s(1,1) q_sN + T_s(1,0))] / [(T_s(0,1) q_sN + T_s(0,0)) q_s1 + (T_s(1,1) q_sN + T_s(1,0))]|². (10)

SPR computations were realized using scripts written in MATLAB: the first script enables the calculation of the structure reflectivity as a function of the incidence angle and the determination of the resonance angle, while the second script enables the calculation of the structure's reflectivity as a function of the film's refractive index.
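To make Eqs. (6)-(10) concrete, here is a minimal MATLAB sketch of the transfer-matrix reflectance in the spirit of the scripts mentioned above. It is our own reconstruction, not the authors' script; the zero-based indices T(0,0), T(0,1), ... of Eq. (10) map onto MATLAB's one-based T(1,1), T(1,2), and so on, and square-root branch choices may need care for lossless layers.

```matlab
% Minimal transfer-matrix sketch for the N-layer SPR reflectance (Abeles method).
% n: complex refractive indices [prism, inner layers..., ambient]; d: inner-layer
% thicknesses [m]; theta: incidence angle in the prism [rad]; pol: 'p' or 's'.
function R = spr_reflectance(lambda, theta, n, d, pol)
    k0   = 2*pi/lambda;
    epsl = n.^2;
    kx2  = (n(1)*sin(theta))^2;             % conserved in-plane component (/k0^2)
    kz   = sqrt(epsl - kx2);                % normal component in each layer (/k0)
    if pol == 'p'
        q = kz./epsl;                       % q_pj of Eq. (6)
    else
        q = kz;                             % q_sj of Eq. (6)
    end
    T = eye(2);
    for j = 2:numel(n)-1                    % characteristic matrix of each inner layer
        bj = k0*d(j-1)*kz(j);               % phase thickness beta_j of Eq. (6)
        Mj = [cos(bj), -1i*sin(bj)/q(j); -1i*q(j)*sin(bj), cos(bj)];
        T  = T*Mj;                          % total transfer matrix
    end
    r = ((T(1,1) + T(1,2)*q(end))*q(1) - (T(2,1) + T(2,2)*q(end))) / ...
        ((T(1,1) + T(1,2)*q(end))*q(1) + (T(2,1) + T(2,2)*q(end)));
    R = abs(r)^2;                           % reflectance, Eq. (10)
end
```

For instance, the three-layer Kretschmann case of Figure 1 can be scanned as follows; the dip should appear near 45°, as in Figure 1b:

```matlab
theta = linspace(30, 60, 601)*pi/180;          % angles inside the BK7 prism
nlay  = [1.515, 0.19 - 3.25i, 1.0];            % BK7, 50 nm Au (after [48]), air
R     = arrayfun(@(t) spr_reflectance(633e-9, t, nlay, 50e-9, 'p'), theta);
[~, i] = min(R);                               % resonance angle at the dip
```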
Four-Layer SPR Configuration with Amorphous ChG Film: Simulation Results

The plasmonic resonance structure proposed by Kretschmann has been developed further and has found applications in the newest photonic devices. Several companies have realized and commercialized successful optical sensors based on SPR. These conventional schemes basically represent a three-layer configuration: (1) a semi-infinite prism made of oxide glass; (2) a gold film with a thickness of 40-50 nm; and (3) an ambient environment, which represents solutions of various chemicals in water.

The properties of a plasmonic resonant structure can change drastically if a thin film of high refractive index is deposited on the metallic film. The structure is transformed into a four-layer configuration, which contains a transparent dielectric film as a waveguide (Figure 2a). The refractive index and film thickness are parameters that ultimately determine the sensitivity of sensors. Amorphous ChG materials are a good candidate for this purpose because they can be deposited on a wide range of substrates. In addition, the nonlinear effects and the photoinduced change in refractive index known in these materials may lead to the development of new photonic devices. The incorporation of these materials in resonant structures leads to a considerable amplification of the effects of their interaction with light.

In Figure 2b, the substrate (3) with the deposited gold film (4) and the ChG film (5) constitutes a planar plasmonic chipset. In photonic devices, the chipset is attached to the prism permanently by an adhesive. During the use of these structures as sensors, the surface with the deposited thin films often deteriorates. In this case, the chipset is attached to the prism base using immersion oil; the chipset can then be changed while the prism remains the same.

Below, we describe the calculations and the analysis of the results obtained with the method presented above. The influence of various material parameters on the characteristics of the SPR structure was analyzed. The simulations considered the thickness variation of the metal and dielectric layers in a four-layer structure: gallium phosphide (the material of the coupling prism)-Au (metal layer)-As2S3 (dielectric layer)-air. We considered three thicknesses of the Au film (40, 45, or 50 nm) and four thicknesses of the As2S3 film (300, 500, 700, and 1000 nm). The refractive index of GaP (n = 3.1) is higher than the As2S3 refractive index (n = 2.45). Figure 3 presents 3D mappings of the p-polarized reflectance Rp. The calculations are carried out for usual laser sources, such as laser diodes or DPSS lasers, and for incidence angles θ ranging between 10° and 80°. The blue color means that the reflectance is close to zero. For a given wavelength λ, multiple resonance angles θ are possible due to the high refractive index of the GaP prism. The resonance angle corresponds to the dip in reflectivity. As shown in the figure, the resonance angles can lie in a wide range, from 20° to 60°. The resonance curve is sharper for angles close to 20°, and a narrower resonance means a higher quality factor.

We carried out calculations for two wavelengths of probe light that are usually employed in optical fiber communications: 1310 nm and 1550 nm. The results are presented in Table 2 and Figure 4 for the 1310 nm wavelength and in Table 3 and Figure 5 for the 1550 nm wavelength. The transfer matrix approach is an efficient and simple tool for the design of SPR structures, and the computer simulations are fast. However, the method does not enable the calculation of the field distribution or the wave attenuation.
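The parameter sweep described above can be reproduced with the spr_reflectance sketch given earlier. The GaP and As2S3 indices are the constant values quoted in the text (dispersion is neglected, which is a simplification), and the Au index at 1310 nm is a placeholder of ours, not a tabulated value.

```matlab
% Sweep Au and As2S3 thicknesses in the four-layer GaP/Au/As2S3/air structure
% and record the resonance angle (the reflectance dip) for each combination.
lambda = 1310e-9; n = [3.1, 0.4 - 8.9i, 2.45, 1.0];  % Au index is a placeholder
theta  = linspace(10, 80, 1401)*pi/180;
for dAu = [40 45 50]*1e-9
    for dChG = [300 500 700 1000]*1e-9
        R = arrayfun(@(t) spr_reflectance(lambda, t, n, [dAu dChG], 'p'), theta);
        [Rmin, i] = min(R);
        fprintf('dAu=%2.0f nm, dChG=%4.0f nm: theta_res=%5.1f deg, Rmin=%6.3f%%\n', ...
                dAu*1e9, dChG*1e9, theta(i)*180/pi, 100*Rmin);
    end
end
```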
Characteristic Equation Method

For such targeted applications, we have developed the characteristic equation; unfortunately, it is a transcendental equation, which requires more computing time. The characteristic equation must be solved numerically, using, for example, MATLAB, to find the real and imaginary parts of the propagation constant. The structure parameters for operation at a laser wavelength of 633 nm are as follows: a BK7 glass prism, a metallic layer of Au (gold), a ChG layer of As2S3, and, lastly, a semi-infinite layer of air. The Au film thickness is 50 nm, and the ChG film thickness d may vary between 200 and 1600 nm. The refractive index of the ChG film was taken to be 2.45. The optical constants of Au were taken from the paper by Rakic [48]; the refractive index at the 633 nm wavelength is n = 0.19 − 3.25i. The real part of the effective refractive index is Neff = β/k0, where k0 is the wave vector in air, and the propagation constant β is a function of the ChG film thickness d. Some results for the propagation constant β are presented in Figure 6 for TE modes.

In resonance conditions, all the energy of the light is absorbed by the oscillating electrons, which leads to zero intensity of the reflected light. To realize the resonant interaction, the light wave vector component parallel to the interface must be equal to the propagation constant of the SPP wave: k0 np sin θ = β, with β/k0 = Neff, where Neff is the effective refractive index of the propagating wave. The maximum value of Neff is obtained for θ = 90°. We consider that the prism and substrate are made of the same material.

The photon energy can be effectively coupled to an SPP mode only when Neff is in the range 1.0 ÷ 1.5. This means that the plasmonic waveguide mode can only be excited for certain film thicknesses, and only one mode at a time. It is an interesting result that the four-layer SPR configuration, which contains a high-refractive-index film (like amorphous ChG glass), can be coupled with light by using a prism of a lower refractive index (like BK7 glass). The high refractive index gives better light confinement near the surface, meaning a better sensitivity in the case of sensors.
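Although the transcendental characteristic equation itself is not reproduced in this review, the coupling condition Neff = np·sinθ suggests a simple numerical shortcut: scan the incidence angle with the transfer-matrix sketch and read off the effective indices of the excitable modes from the reflectance dips. This is our illustration, not the authors' characteristic-equation solver, and the 400 nm film thickness is an arbitrary choice within the stated 200-1600 nm range.

```matlab
% Locate guided-mode effective indices from reflectance dips: at each dip,
% Neff = n_prism * sin(theta_res). BK7/Au(50 nm)/As2S3(d)/air at 633 nm.
np = 1.515; n = [np, 0.19 - 3.25i, 2.45, 1.0];
theta = linspace(asin(1.0/np), pi/2 - 1e-3, 4000);   % Neff scanned upward from 1.0
R = arrayfun(@(t) spr_reflectance(633e-9, t, n, [50e-9 400e-9], 'p'), theta);
dip = find(R(2:end-1) < R(1:end-2) & R(2:end-1) < R(3:end)) + 1;  % local minima
Neff = np*sin(theta(dip));                            % effective mode indices
```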
The electric and magnetic field distributions of the TM and TE modes can also be found (Figure 7) [20]. For example, the whole electromagnetic field for the TE modes excited within the SPR structure can be derived by calculating one component of the electric field (e.g., Ey, which is perpendicular to the plane of incidence) within every layer of the SPR structure. The one-dimensional propagation equation for Ey within every layer of the SPR structure is given as [19,20]:

d²Ey/dx² + (k0² ni² − β²) Ey = 0, i = 1 ÷ 4, (11)

where k0 is the vacuum wave vector; β = k0 n1 sin(θ), given by Equation (4), is the "axial" wavevector along the z axis; and ni is the refractive index of the ith layer.

The equation system (11) contains one equation for each medium of the SPR structure, and these are bonded by continuity conditions. The general solution of these differential equations is a superposition of progressive and regressive waves along the x axis for every region of the SPR structure, except for the last medium (air), where the emerging wave is described as a progressive wave only. The continuity equations for the magnetic and electric fields at the three interfaces of the waveguide structure are fulfilled if the magnetic field and its spatial derivative divided by n² are continuous. The continuity equations for the three interfaces are equivalent to six algebraic equations that enable the calculation of the wave amplitudes within the SPR layers, considering the amplitude of the incident wave and the propagation constant as input parameters. These equations are solved numerically in MATLAB with ordinary solvers, and the amplitude and power of the wave reflected at the glass-metal interface are calculated relative to the power of the incident wave.
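A sketch of this piecewise solution structure, in our notation (written for the TE case, where Ey and its plain derivative are continuous; for TM, the derivative of the magnetic field divided by n² is the continuous quantity):

$$E_y^{(i)}(x) = A_i\, e^{\,\mathrm{i} k_{x,i} x} + B_i\, e^{-\mathrm{i} k_{x,i} x},\qquad k_{x,i}=\sqrt{k_0^2 n_i^2-\beta^2},\quad i=1,\dots,4,$$

with only a progressive wave in the final air region and continuity of the field and of its derivative at the three interfaces. Fixing the incident amplitude A1 leaves six unknown amplitudes, determined by the six matching equations mentioned above.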
Figure 7 presents a two-dimensional plot of the intensity distribution in the SPR structure with a ChG film (of 0.25 µm thickness) for the TE0 (a) and TE1 (b) modes at 633 nm wavelength. More intense red colors mean a greater intensity of the field, while the dark blue color means that the field has a near-zero value. For the TE0 mode, the propagation of the plasmonic waves is mainly through the center of the film, while for the TE1 mode, the energy propagates closer to the interfaces of the chalcogenide film with the adjacent media.

Nonlinear Behavior of SPR Structures That Contain Amorphous As2S3 Thin Films

Amorphous solid materials enable physical phenomena that are not observed in crystalline materials. The most important phenomenon for optoelectronics is the modification of the optical constants, either real or imaginary, under laser irradiation. Under low-intensity laser illumination, the optical bandgap decreases in amorphous As2S3 and As2Se3. Photodarkening in As2S3 or As2Se3 [49] is associated with structural changes; the redshift of the absorption edge depends linearly on the intensity in the domain of 50 ÷ 200 mW/cm². Photodarkening in amorphous As2S3 or As2Se3 can be enhanced by doping with rare earth elements, as shown in [50]. Studies carried out on As2Se3 nanolayers [51] reliably demonstrated the oxidation of arsenic during illumination. This process may be implicated in the long-term (hundreds of minutes) modification of the optical transmission in amorphous chalcogenides. More complex photoinduced changes also take place on time scales from nanoseconds down to femtoseconds [52], when ChGs exhibit transient absorption (TA) triggered by the recombination of self-trapped excitons.
The photoinduced changes in optical transmission were first observed experimentally by de Neuville, and many theoretical models have been proposed for describing the experimental findings. For example, the models of Street [53], Tanaka [54], and Elliott [55] relate the experimental findings to structural changes in the first coordination sphere. Other models suggest that changes in the electron energy spectrum are responsible for the photoinduced optical phenomena [56]. The photoinduced modifications of the refractive index of thin ChG films are below 0.01, while the modification of the absorption coefficient is about 10%. Such small modifications raise serious challenges in terms of reproducibility and of the technology for producing and storing the thin layers. More sensitive experimental methods have to be used for characterization, as in [57], which presents a study of reversible photoinduced phenomena that occur within an amorphous ChG material placed in a structure that supports SPR resonance.

In SPR structures, strong changes in the reflected signal can occur even for small changes in the refractive index of amorphous chalcogenide films, due to the resonance conditions. The authors of [58] demonstrated SPR light modulation using amorphous Ga-La-S films in SPR structures. A prism made of a rutile (TiO2) monocrystal with a high refractive index was used. As amorphous As2S3 has an even higher refractive index (n = 2.45), it is difficult to select the right material for prism fabrication. However, the ChG film constitutes a planar plasmonic waveguide that can maintain several modes. The effective refractive index Neff = β/k0 of the waveguide depends on the As2S3 film thickness and can vary from 1 (the air refractive index) to 2.45 (the amorphous film refractive index). For some thicknesses, the effective refractive index can be lower than the refractive index of BK7 glass, which means that for these thicknesses the resonant coupling corresponding to the condition Neff = np·sinθ can be realized.

Software was developed in MATLAB (R2017a) for the SPR calculations by the matrix method. Calculations were carried out for a four-layer system. A wavelength of 514 nm was selected to maximize the interaction of light with the amorphous As2S3 material; this should correspond to the condition αd ≈ 1. We can make the following estimate: for films with a thickness of d = 1 µm, the optical absorption coefficient α should be about 10⁴ cm⁻¹, which corresponds to the selected wavelength. The structure is as follows: BK7 (prism)-Au (50 nm metallic film)-As2S3 (with different film thicknesses)-air. Calculations show that there is only one dip for small thicknesses, corresponding to the basic plasmon interaction, while for thicknesses of 250 nm and greater, in addition to the plasmonic dip corresponding to the resonance angle of 65°, there are also dips corresponding to the guided modes. There are also resonance angles for incident light with s polarization. The numerical simulations indicate that the resonance curves are sharper in the case of coupling with waveguide modes. The reflectivity may change from 0% to 100% with a modification of the As2S3 film refractive index of only 1%. Such small modifications are known to occur in amorphous As2S3 films under illumination of the order of 10 mW/cm². The developed model was published in our paper [59]. A detailed calculation of the nonlinear equation shows a hysteresis-type dependence on the input power; a toy illustration of this feedback mechanism is sketched below.
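The following loop illustrates how a self-induced refractive-index change can produce hysteresis when the working point sits on the steep flank of the resonance. The linear feedback law n3 = n30 + κ·Pin·(1 − R) and the value of κ are our assumptions, not the model of [59]; the layer parameters are those quoted in the experimental description below.

```matlab
% Toy self-consistency loop: the absorbed power shifts the As2S3 index, which
% shifts the resonance, which changes the absorbed power. Sweeping the input
% power up and then down can trace different branches (hysteresis).
np = 1.5205; theta0 = 40.8*pi/180;          % working point from the text
n30 = 2.852 - 0.019i; kappa = 2e-4;         % feedback coefficient: assumed value
Pin = [linspace(1, 20, 40), linspace(20, 1, 40)];   % power in mW, up then down
n3 = n30; Pout = zeros(size(Pin));
for s = 1:numel(Pin)
    for it = 1:200                           % damped fixed-point iteration
        n = [np, 0.682 - 2.020i, n3, 1.0];
        R = spr_reflectance(514e-9, theta0, n, [50e-9 1e-6], 'p');
        n3 = 0.9*n3 + 0.1*(n30 + kappa*Pin(s)*(1 - R));  % assumed linear shift
    end
    Pout(s) = R*Pin(s);                      % reflected power at steady state
end
```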
Here we would like to present some experimental results only. The experimental schematic was like the one presented in Figure 2b. The four constituent regions have the following refractive indices at a wavelength of 514 nm: a semi-infinite BK7 glass with the refractive index n1 = 1.5205, representing the substrate and the coupling prism; a thin metallic Au layer with the complex refractive index [48] n2 = 0.682 − 2.020i; a thin amorphous As2S3 film with the complex refractive index n3 = 2.852 − 0.019i, which was determined from ellipsometry studies; and a semi-infinite air cover region (n4 = 1). The high refractive index of the amorphous As2S3 layer forms a plasmonic planar waveguide. There is a sharp drop in the reflectivity of the light due to the resonant coupling of the incident radiation with a waveguide mode when the metallic film thickness is close to 50 nm (according to the calculations).

The amorphous ChG As2S3 film was obtained by thermal evaporation in a vacuum of 6.6 × 10⁻⁴ Pa. The technological setup was improved to avoid droplet formation and a high roughness of the film surface. Small granules of As2S3, obtained from bulk and shredded material, were placed in a tube made of fused quartz. This tube was heated from the outside by a nichrome coil through which current passes. The radiation of a CW argon laser with a beam diameter of 1 mm was directed onto the plasmonic structure through the coupling prism made of BK7 glass. The polarization of the laser was in the plane of incidence, so the TM modes were excited. The power of the incident laser was varied in the range of 1 to 20 mW.

Figure 8 shows the reflected laser power as a function of the incident intensity. The data are very consistent with what the model predicted. At the initial stage, when the power of the incident light was low, the lowest reflection was obtained by setting the angle of incidence; the angle found was 41.13° (Figure 8a). The angle of incidence was then changed by rotating the structure down to 40.8° (near point 2 on the curve in Figure 8a). In this position, the dependence of the output power Pout on the incident power was measured. The results are shown in Figure 8b. A large loop of nonlinear hysteresis was established for laser beam powers in the range of 10 ÷ 11 mW. Hysteresis was not observed for initial angle settings greater than 41.13°, near point 1.

The reflectance is very sensitive to small changes in the ChG film's refractive index induced by the CW argon laser radiation at 514 nm. In the four-layer configuration, an As2S3 film with a high refractive index acts as a waveguide. Mode dispersion and self-induced changes of the optical constants lead to an optical hysteresis loop, whose shape depends strongly on the initial angle of incidence.
In conclusion, the experiments with amorphous As2S3 films in the SPR configuration established a hysteresis loop in the output power. It was demonstrated that the SPR structure with an amorphous As2S3 film can be a promising medium for active plasmonic devices. Future studies can be carried out in order to obtain optical bistability. The established reflectance depends on the laser intensity at a specific wavelength. For many ChG materials used for information recording, the required exposure for inducing optical modifications is of the order of 1 ÷ 2 J/cm² [60,61]. Moreover, SPR structures containing a ChG layer that displays reversible changes in the photoinduced optical axis enable the engineering of optical memory devices.

SPR Optical Sensors in a Four-Layer Structure That Contains an Amorphous As2S3 Film

Optical sensors based on SPR in Kretschmann's three-layer configuration are an optically pumped and optically interrogated powerful sensing platform, especially for bio-sensing. The main advantage is that real-time measurements permit investigating the kinetics of biological reactions. Several instrumentation giants [62,63] entered the market with new high-performance devices after BIACORE, a Swedish company situated in Uppsala [64], was launched as one of the first startups. New startups continue to enter the market [65], which confirms that the commercial base is growing.
A basic feature of the three-layer configuration is its simplicity and high stability, because it only uses optical glass and a gold film. As far as sensitivity is concerned, this structure has no room for maneuver, its parameters being fixed by the materials' optical constants. As shown at the beginning of this article, the four-layer structure offers advanced possibilities for tuning the sensitivity (in particular, the depth of field), but also for devices with active optical properties. An approach to structures with nonlinear optical properties was examined above. Next, we unfold the possibilities of the four-layer structure containing a film of an amorphous ChG material of the As2S3 type for chemical sensors and biosensors.

The operation principle of optical sensors is based on the determination of the change in the refractive index. Optical sensors can determine a wide range of chemical compounds by adjusting the angle of incidence. This is simpler than fiber-optic sensors, which require the scanning of the interrogation wavelength [8,40,41,66]. SPR can be achieved at the interface between a metal film and a dielectric medium. SPR-based devices are known in various configurations [67,68]. SPR sensors are known for the detection of hydrogen [69], methylene groups [70], hydrocarbons such as ethane and methane [71], nitrogen dioxide [72], etc.

SPR Sensors with As2S3 for Alcohol Identification

Alcohols (e.g., ethanol) are of great importance since they are used in various areas such as beverages, the food industry, fuel, the environment, security, etc. Alcohols differ in their refractive index, and small changes in the refractive index or the extinction coefficient can be recorded when the created structure is placed in a resonant structure. In some papers [73,74], the optical reflection change for ethanol sensing has been demonstrated in an SPR configuration using thin films of TiO2 nanocrystals. The minimum concentration detected amounted to 780 ppm, leading to a 4% change in reflection. The coupling of light with the plasmonic wave was ensured in the Kretschmann configuration [75]. A coupling prism is used to increase the angle of incidence. The resonance conditions are highly dependent on the refractive index of the neighboring medium, so very small changes result in a measurable shift of the resonant dip.

In this section, concrete numerical calculations of SPR for some alcohols of practical importance are presented briefly. The sensor optimization was undertaken by adjusting the thickness of the ChG film. We presented the results in more detail in a previous paper [76]. The structure is in contact with the liquid to be investigated, in our case an alcohol. The specific alcohols are identified by measuring the resonance incidence angle. Three cases with different thicknesses were considered: films made of amorphous arsenic sulfide with thicknesses of 800 nm, 1000 nm, and 1100 nm. The interrogation wavelength was 1550 nm, and the gold film was of 40 nm thickness.
The refractive index of the rutile prism was considered to be between 2.45 (ordinary) and 2.70 (extraordinary). Five alcohols with known refractive indices (methanol, ethanol, propanol, butanol, and pentanol) were considered. Our calculations were performed to establish whether alcohols can be distinguished by a four-layer SPR optical device. Using the known refractive index of each alcohol, the resonance angle was calculated. The results are summarized in Table 4. The ratio ∆θ(Rmin)/∆n (the variation of θmin with the refractive index) represents the selectivity power with respect to the refractive index. The following parameters were obtained: 4.90°/RIU for d = 800 nm; 13.30°/RIU for d = 1000 nm; and 8.40°/RIU for d = 1100 nm. Note: RIU means refractive index units.

The reflectance curves for a few alcohols are shown in Figure 9. It can be seen that the shift of the dip and the shape of the curve make it possible to clearly identify the alcohols. The four-layer SPR structure is very sensitive to changes in refractive indices, and the sensitivity can be modified by adjusting the film's thickness. The resolution of the refractive index depends on the precision of the angle determination and on the stability of the light source. Refractive index changes of less than 10⁻⁴ can be distinguished in our case. The method can be applied not only for alcohol identification but also for the identification of other liquids. The working wavelength of 1550 nm corresponds to fiber-optic information networks, which is an advantage for devices.

SPR Bio-Sensor with an As2S3 Film for E. coli Bacteria Detection

In the previous section, we presented the calculated characteristics of the four-layer SPR structure for alcohol identification. We published similar analyses for hydrocarbon identification in a previous paper [77]. Here, we present the use of a four-layer SPR structure as a sensor able to detect pathogenic strains such as E. coli bacteria. More details are presented in our paper [78].

In order to avoid working with bacteria that might be unsafe, the aim was to detect the marker, an enzyme produced by the living bacteria. The marker is named β-galactosidase. The sensor is presented schematically in Figure 10. It consists of a cell that is pressed onto the plasmonic chipset. The chipset contains gold and amorphous As2S3 films, usually obtained by vacuum deposition on a plane glass substrate whose back side is glued to the prism's base. The liquids for characterization (which can be different chemicals) flow into the cell. The measurements consist of a resonance angle determination. The enzyme was procured from Sigma Aldrich Chemicals Pvt. Ltd., Darmstadt, Germany.

The solution refractive index was found for five concentrations (0%, 0.05%, 0.1%, 0.5%, and 1%). It was considered that the refractive index of the solution changes linearly from that of water, nw (0%), to that of the pure enzyme, ng; accordingly, the following relationship can be written:

n(c) = nw + (ng − nw)·c,

where c is the enzyme concentration. Here, unlike in the previous situations, in the calculations it was considered that the refractive index of the ambient environment is that of the solution, not that of air (which had always been taken as equal to one). The structure parameters are presented in Table 5. The transfer matrix formalism was used to calculate the reflectance characterizing the plasmonic structure with the parameters presented in Table 5. The results for the reflectance are presented in Figure 11 for TM polarization. The SPR resonance angle was measured with an accuracy of 1%. The established half-width of the resonance curve was 0.25°. The results presented in Figure 12 show a quasi-linear dependence of the resonance angle on the enzyme concentration.
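This quasi-linear dependence can be checked numerically by combining the linear mixing rule with the transfer-matrix sketch given earlier. The pure-enzyme index ng, the Au index at 1550 nm, and the As2S3 thickness below are placeholder values of ours, since Table 5 is not reproduced here; the code simply reads off the lowest dip at each concentration.

```matlab
% Resonance angle versus enzyme concentration via the linear mixing rule
% n(c) = nw + (ng - nw)*c. ng and the Au index are placeholders (see text).
nw = 1.333; ng = 1.45;                     % water and pure-enzyme indices (assumed)
theta = linspace(20, 85, 3251)*pi/180;
for c = [0 0.05 0.1 0.5 1]/100
    namb = nw + (ng - nw)*c;               % solution refractive index
    n = [2.45, 0.4 - 9.0i, 2.45, namb];    % rutile prism, Au, As2S3, solution
    R = arrayfun(@(t) spr_reflectance(1550e-9, t, n, [40e-9 1e-6], 'p'), theta);
    [~, i] = min(R);
    fprintf('c = %5.2f%%: theta_res = %6.2f deg\n', 100*c, theta(i)*180/pi);
end
```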
The results presented in Figure 12 show a quasi-linear dependency of the resonance angle on the enzyme concentration. This research has shown that a waveguide SPR structure made of high-refractive-index materials, such as amorphous chalcogenide, has good sensitivity and can be used to identify the presence of E. coli bacteria by determining the concentration of a marker enzyme in aqueous solution. The concentration is determined by measuring the angle of resonance: the solution with a certain concentration "resonates" at a specific incidence beam angle, which can be measured.

Discussion

The SPR concept currently underpins many optical environmental sensors, and several instrument platforms are marketed around the world. Their operation is based on the Kretschmann configuration for coupling light with surface plasmon waves. This interaction is manifested by a very narrow resonance curve whose position depends on the refractive index of the surrounding environment. In this configuration, which is a three-layer one, the position and width of the resonance curve are fixed, defined by the optical constants (n, k) of the metal film and the refractive index of the coupling prism. There are, of course, small changes related to dispersion, but they are insignificant. The resonance characteristics can, however, be considerably altered if the SPR structure also includes a film of an optically transparent material, forming a four-layer SPR structure. The transparent film with a high refractive index can be deposited on the substrate over the metal film. The opposite side of the substrate is attached to the prism by an immersion oil, which must have optical properties close to those of the substrate; the prism can thus be reused with several chipsets, which wear out after several applications. Amorphous ChG materials can serve as the dielectric film: they also exhibit semiconductor qualities, but for photon energies below the forbidden band they have good optical transparency.

The four-layer structure acts as a planar waveguide, but the metal film, which is indispensable for achieving the SPR interaction, introduces significant optical losses. These are minimized by selecting the thickness, which should be around 50 nm for a gold film; the dispersion of gold's optical constants is such that, for the near-IR spectral domain, thicknesses of 40 ÷ 45 nm are closer to optimal. The calculations are based on the transfer matrix method: the multilayer structure is treated as a multiplication of matrices, so calculations can be made for a structure with an arbitrary number of layers. Calculations have shown that the resonance curves are quite sharp.
The increased refractive index of the film is essential to achieving better field confinement near the surface. In addition to the specific situation related to odd and even modes, the necessary degree of field confinement is achieved by positioning the plasmonic waveguide mode relative to its cut-off, which is ensured by selecting the thickness of the film. Amorphous ChG materials are considered suitable for optical waveguide development. Films with the compositions As2S3 and As2Se3 are considered reference materials: they have a high refractive index (2.45 ÷ 3.00), low optical losses in the transparency band, and can easily be obtained on large metal or dielectric surfaces by vacuum deposition techniques. As can be seen from our reference papers, an optical absorption of 100 cm−1 leads to a considerable widening of the resonance contour and, consequently, to a decrease in the device's performance. Note that 100 cm−1 corresponds to an optical absorption of only 1% in a film with a thickness of 1 µm, since 1 − exp(−αd) ≈ αd = 100 cm−1 × 10−4 cm = 0.01.

SPR in a four-layer configuration can always be achieved if a prism with a refractive index higher than that of the ChG film is used. The angle at the base of the prism must be chosen equal to the calculated incident resonance angle; in this case, the laser beam is directed normally onto the side face of the prism, and the exact plasmon resonance angle is reached with small adjustments of the angle of incidence.

Simulations show that the four-layer SPR configuration allows plasmonic resonance to be obtained for both p polarization (TM mode) and s polarization (TE mode). The coupling prism can be made from GaP, an anisotropic material with a refractive index (2.67 and 2.54 in the VIS domain) higher than that of the amorphous As2S3 ChG film used as a waveguide. The incident light is coupled to a given plasmonic waveguide mode through the evanescent field that penetrates the gold film, which is semi-transparent at thicknesses in the range of 40-50 nm. With such a prism, SPR is obtained over a continuous range of As2S3 film thicknesses. Regarding the incident light polarization, the R_p minima are lower, and very close to zero, compared with the R_s minima. The best gold film thickness for the clearest resonance in the near-IR band was 40 nm, which differs from the optimal 50 nm thickness known for the visible spectral range. At a wavelength of 1310 nm, an absolute minimum R_p of 0.004% was obtained for a 300 nm As2S3 layer, while an absolute minimum R_s of 1.43% was obtained for a 500 nm As2S3 layer. At a wavelength of 1550 nm, the minima are higher: the minimum R_p is 0.66% for a 1000 nm As2S3 layer and 1.15% for a 300 nm As2S3 layer, and the minimum R_s is 5.93% for a 1000 nm As2S3 layer and 10.72% for a 500 nm As2S3 layer.
A topical issue is the need to use high-quality, low-cost borosilicate glass materials (such as BK7) for coupling prisms. These materials have refractive indices in the range 1.5 ÷ 1.7, much lower than the refractive index of the ChG materials used to make the plasmonic waveguide, so coupling may seem impossible at first glance. However, ChG films can support planar guided modes with an effective refractive index of less than 1.51 (the BK7 refractive index in the visible domain), and these can be excited by prisms made of material with this refractive index or higher. The calculations establish that SPR can be provided in this case, but only for ChG films whose thickness lies within a certain range.

Photoinduced changes of the optical constants in amorphous ChG are a well-known phenomenon. The magnitude of these changes in ChG thin films is of the order of 10−2; such small changes pose serious challenges in terms of reproducibility and technology. In our work, a model for self-induced nonlinear changes of the refractive index in As2S3 has been developed, and the self-induced reflectance change in the SPR resonance structure with As2S3 films was studied experimentally.

The model has been confirmed experimentally. First, the resonance angle for low light intensities was determined, at 41.13°. Then a small detuning of the angle from the resonant position on the left branch was made (see point 2 in Figure 8a). A hysteresis loop was obtained by increasing and decreasing the intensity of light (Figure 8b); no hysteresis loop was observed when the initial detuning was on the right branch (see point 1 in Figure 8a) of the resonance curve. The research has demonstrated the possibility of realizing low-intensity bistable optical devices in SPR structures with amorphous ChG films.

Conclusions

Four-layer structures containing amorphous ChG materials open up new possibilities in terms of manipulating the degree of confinement, the sensitivity to refractive-index changes, and the depth of field. Our research has shown the ability of the SPR structure to identify alcohols or liquid hydrocarbons. The ability of SPR biosensors with an amorphous As2S3 film to detect E. coli bacteria has been experimentally demonstrated; the method was to detect the concentration of a marker, an enzyme produced by the bacteria while alive, namely β-galactosidase. The four-layer configuration with an amorphous As2S3 film demonstrated the high sensitivity of the method: a 0.05% change in concentration causes an 18-arcsecond change in the plasmon resonance angle, which can be resolved. The four-layer SPR configuration offers the possibility of developing new nonlinear devices and advanced optical sensors.

Our experiments with amorphous As2S3 films in the SPR configuration established a hysteresis loop in the output power, so these films are promising media for active plasmonic devices. Future studies can be undertaken in order to obtain optical bistability. SPR structures containing a ChG that displays reversible changes of the photoinduced optical axis enable the engineering of optical memory devices.

Two conditions must be met to excite surface plasmon-polariton (SPP) waves: (a) only TM polarization can trigger SPP waves, because only then does the electric field have a component perpendicular to the dielectric-metal interface to drive the oscillation of free electrons; and (b) the real permittivity of the metal (ε_1r) and that of the dielectric (ε_2) must be of opposite sign and satisfy the condition Re{ε_1r} < −ε_2.
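Conditions (a) and (b) fix the resonance position in the basic three-layer (Kretschmann) case: the in-plane wavevector of the prism-coupled light must match the SPP propagation constant k_spp = k0·sqrt(ε1ε2/(ε1 + ε2)). A minimal sketch follows; the permittivity values are illustrative textbook-style numbers, not measurements from this work.

```python
import cmath, math

def kretschmann_angle(n_prism, eps_metal, eps_dielectric):
    """Resonance angle (deg) from the SPP dispersion relation
    k_spp = k0 * sqrt(eps1 * eps2 / (eps1 + eps2)), phase-matched to the
    prism wave k0 * n_prism * sin(theta). Requires Re(eps1) < -eps2."""
    n_spp = cmath.sqrt(eps_metal * eps_dielectric / (eps_metal + eps_dielectric))
    s = n_spp.real / n_prism
    if not -1.0 <= s <= 1.0:
        raise ValueError("prism index too low to phase-match the SPP")
    return math.degrees(math.asin(s))

# Illustrative values near 633 nm: gold eps ~ -11.6 + 1.2j, water eps = 1.33**2
print(f"{kretschmann_angle(1.51, -11.6 + 1.2j, 1.33**2):.1f} deg")  # ~72.9 deg
```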
Figure 1. Three-layer SPR configuration (a) and the resonance curve obtained experimentally with an Au thin film of 50 nm thickness (b).

Figure 3. A 3D representation of the reflection coefficient of p-polarized light for SPR structures with 40 nm thick Au films and 500 nm thick As2S3 films. More intense red colors mean higher field intensity; blue means that the value of the field is close to zero. For a given wavelength λ, multiple resonant angles θ are possible due to the high refractive index of the GaP prism. The resonance angle corresponds to the dip in reflectivity. As shown in the figure, resonant angles span a wide range, from 20° to 60°. The resonance curve is sharper for angles close to 20°, and a narrower resonance means a higher quality factor.

Figure 4. Reflectance as a function of incidence angle θ for s polarization at the wavelength of 1310 nm. The SPR structure has an Au film (40 nm thickness) and an As2S3 film with different thicknesses in nm. For a thickness of 300 nm, only one resonance, at 33°, occurs. For larger thicknesses, two sharp resonances can be observed, meaning that higher sensor sensitivities can be obtained.

Figure 5. The reflectances R_p and R_s as a function of incidence angle θ for two polarizations (p and s) at a wavelength of 1550 nm: (a) R_p for the SPR structure with an Au film (40 nm thickness) and As2S3 films with different thicknesses in nm; (b) R_s for the same structure. For the same film thickness, the resonance angles in (b) are lower for s polarization. The reflectances at the resonance angle are close to zero for p polarization, while for s polarization they are not. At large thicknesses (700 nm and 1000 nm), two resonance angles corresponding to different waveguide modes can be observed.

Figure 6. Real part (a) and imaginary part (b) of the propagation constant for the TE modes.

Figure 7. Two-dimensional plot of the intensity distribution in the SPR structure with a ChG film (0.25 µm thickness) for the TE0 (a) and TE1 (b) modes at 633 nm wavelength. More intense red colors mean greater field intensity; dark blue means that the field has a near-zero value. For the TE0 mode, the plasmonic waves propagate mainly through the center of the film, while for the TE1 mode, the energy propagates closer to the interfaces of the chalcogenide film with the adjacent media.

Figure 8. Hysteresis of the reflected output power in the SPR structure with an As2S3 film. (a) Experimental output power recorded at a near-resonance angle of 41.13° at a low incident power of 2.8 mW. (b) Output power vs. incident power for an incidence angle of 40.8°. The up arrow on the black curve indicates increasing power, while the down arrow on the red curve indicates decreasing power. Nonlinear hysteresis was established for a laser beam power of 10 ÷ 11 mW if the initial angle was adjusted to values lower than the resonance dip (near point 2). Points 1 and 2 in (a) are indicative, serving only to differentiate the behavior of the system for angles higher or lower than the resonance angle of 41.13°.

Figure 9. Reflectance of light obtained for several alcohols.

Figure 10. Schematic of the SPR sensor.

Table 1. Optical constants of some usual metals used in SPR experiments.

Table 3. Minima in % for R_p and R_s polarizations at a wavelength of 1550 nm.

Table 4. Angle of resonance for different alcohols.

Table 5. Parameters of the SPR structure for enzyme identification.
15,521.8
2023-09-01T00:00:00.000
[ "Physics" ]
A Systematic Review on Non-Performing Assets in Banks in India

Lending is a crucial part of the financial sector, that is, banks and NBFCs in India; it is the main revenue-generating business of a bank or NBFC. Financial institutions, i.e., banks and NBFCs, borrow funds from the market, from other institutions and the public, and lend them on to clients to earn profits for their owners and investors. There were 27 public sector banks in India (including the SBI associate banks) before the Union Government of India announced the merger of some banks in 2019, and there are many other private sector banks, NBFCs, co-operative banks, and regional rural banks, which we consider in this paper. The lending business of banks and NBFCs has been facing a slowdown in recent years. Non-performing assets are increasing day by day, which creates a big problem not only for the financial sector, i.e., banks and NBFCs, but also for other industries. In this paper we systematically review the literature already published on NPAs in India, identify the main reasons and factors responsible for rising NPAs in financial institutions, and establish the scope for further research on this topic.

Introduction

NPA: We can describe NPAs as non-performing assets, stressed assets, and stressed loans. An asset or loan becomes an NPA when it stops generating revenue for the organization. The history of banking in India begins with the Bank of Hindustan, established in 1770. SBI is the oldest of the surviving banks in India: it was formed as the Bank of Calcutta in 1806, renamed the Imperial Bank of India in 1921, and renamed again as the State Bank of India (SBI) in 1955. Non-banking financial companies (NBFCs) began to be incorporated after independence, in the 1960s. Fourteen banks were first nationalized in 1969, and 6 more banks were nationalized in 1980. In 1991 a nationalized bank, New Bank of India, was merged with PNB, and the total number of nationalized banks dropped from 20 to 19. The Government of India is now planning further bank mergers and has announced a list of banks to be merged with each other, reducing the total number of nationalized banks to 12; SBI's associate banks have already been merged into SBI. At present a large number of public sector banks, private banks, and NBFCs operate in India, with the primary objective of earning profits through the lending business.

Financial institutions, i.e., banks and NBFCs, deal in multiple types of loans: home loans, loans against property, auto loans, business loans, personal loans, gold loans, MSME loans, agricultural loans, and other working-capital finance limits. Each loan has its own repayment terms, ranging from 1 year to 30 years, while working-capital limits are renewed annually. Both interest and principal are to be repaid by the client as per the repayment terms in the form of EMIs; loans on which repayment does not arrive within a certain period are categorized by the bank as non-performing assets. Such non-performing assets/loans are rising day by day in banks and NBFCs in India, which affects the profitability of the institutions and raises problems both for them and for the economy of the country.
Objectives

• To systematically review the literature already published on non-performing assets in India
• To find the reasons for rising NPAs and the scope for further research on this topic
• To identify the major determinants of NPAs in the banking industry, as the NPAs of financial institutions in India are rising very fast, creating problems for banks, the government, and the public of India

Literature Review

In this paper we reviewed two databases, Taylor and Francis and Web of Science. We found and reviewed a total of 362 papers from these two databases. We then excluded and included papers on the basis of several variables, excluding 347 papers, and finally found 15 papers written on this topic. The exclusion and inclusion criteria are detailed below.

A. Goyal and A. Sharma studied the causes of slow credit growth and of NPAs using bank-level and firm-level panels, with data up to 2015, to test bank lending against the aggregate demand channel as an explanation for lower credit growth in India. The study asks whether slow demand or credit lending policy is the key reason for rising NPAs, and examines the reasons for the slow growth rate in India. It shows that aggregate demand is more dominant than banks' lending policy: both panels, bank-level and firm-level, show that the main cause of the slowdown in credit growth is low demand, not high non-performing assets. Global demand also affected banks' advances as well as NPAs. Aggregate demand has fallen since 2011 because of the fight against inflation, which has increased NPAs and slowed credit growth; key structural changes in a positive direction are needed to revive credit growth and demand in India.

D. Gaur and D. R. Mohapatra compared non-performing assets in the priority and non-priority sectors in India, using secondary data on the NPAs of banks in these two sectors from 2012 to 2017. It is observed that the CAGR of priority-sector NPAs rose at a lower rate in public sector banks than in private banks, and that NPA growth is lower in the priority sector than in the non-priority sector. The situation is the opposite for credit growth, which is higher in the priority sector than in the non-priority sector for both private and public sector banks. Based on regression analysis, the study concludes that the difference in priority-sector NPAs between private and public sector banks is significant. Priority-sector NPAs show a high relation with total advances in private sector banks; that is, more of the loans given by private banks to the priority sector have gone bad than of those given by public sector banks, so PSBs have managed their priority-sector loan portfolios better. The study also concludes that NPAs are higher in the priority sector than the combined average NPAs of the priority and non-priority sectors together. Hence, the NPAs contributed by the non-priority sector are lower than those in the priority sector, but the priority sector is crucial for the country's economic growth and cannot be denied credit.
Findings show that the correlation of the average WCG score with the net NPA ratio is significant in PSBs, as is the regression of net non-performing assets on the mean WCG score in PSBs. Both correlation and regression showed the opposite pattern in private sector banks, where they were insignificant. A further finding is that correlation and regression were positive in all cases, which shows that corporate governance had been improving over the previous 6 years but was not able to reduce NPAs in either public or private sector banks in India.

M. Kuar and R. Kumar examined the NPAs of the priority and non-priority sectors in the pre-crisis and post-crisis periods and studied the relationship between private and public sector banks, using statistical tools such as the t-test and growth rates to test for significant differences between the NPAs of private banks and PSBs across the two periods. The study found that priority-sector NPAs were significantly higher in both private banks and PSBs during the pre-crisis period, but NPA growth turned negative in both in the post-crisis period. The growth rate of priority-sector NPAs was higher in private sector banks than in PSBs. The analysis also found that non-priority-sector NPAs were falling in PSBs during the pre-crisis period, while private sector banks showed an upward trend. The study concluded that the overall effect of the crisis was observed significantly in 7 PSBs, whose NPAs were significantly low in the pre-crisis period but increased significantly in the post-crisis period.

M. Kuar and R. Kumar also studied the framework of NPAs and the effects of bank-specific variables on non-performing assets. A sample of ten public sector banks was taken on the basis of size, selected by quartile deviation, for the period from 2001 to 2013. The data were analyzed as panel data, using secondary data collected from the many reports published by the RBI. It was shown that bank-specific factors and macro-level determinants both affect the gross NPAs of PSBs. In PSBs, independent variables such as ROE, ROA, CDR, ROI, and CAR showed a significantly negative impact on the dependent variable, gross NPAs: a bank with sound capital and strong margins will have lower NPAs. Macro-level factors were also shown to significantly affect banks' NPAs. The results further showed that a rise in the supply of domestic currency can be expected to push up the non-performing assets of banking organizations. A rise in unemployment in the country negatively impacts personal incomes, which increases defaults on obligations; the tightening of loan rates likewise increases repayment problems for borrowers, mainly those who have borrowed at variable rates. A higher WALR leads to lower NPAs, and vice versa.

N. Arora, N. Grover, and K. Kanwar tested the hypothesis that non-performing assets have reached an alarming level in commercial banking in India, where they have begun to affect technical efficiency negatively. It was observed that the effect of NPAs is significant on overall technical efficiency and its different components. The main source of poor efficiency is the technology gap, that is, the structure and purpose of banking. The senior management of private and foreign banks is comparatively more professional, expert, and competent than that of PSBs; hence they are comparatively more capable of preparing plans and policies for the recovery of funds from borrowers.
PSBs are required to advance loans to weaker sections as per government instructions, where the chances of recovery are very low. The NPAs of PSBs are on a decreasing trend but are still higher than those of private and foreign banks. As the roles of CIBIL and SARFAESI have grown, banks can recover debts faster.

S. Ghosh studied the relationship between corporate leverage and banks' NPAs using data on the manufacturing industry in India from 1993 to 2004. The findings showed that leverage is a crucial determinant of banks' NPAs: a 10 percentage point increase in a company's leverage was associated with a 1.3 percentage point increase in bad loans relative to total loans. An increase in the cost of capital has two effects on NPAs. First, it motivates banks to lend more, which decreases the NPA ratio through an increase in the denominator. Second, it increases borrowers' borrowing costs, which increases the numerator and hence the NPA ratio. The net effect shows that the second aspect dominates the first. The study therefore suggests that high leverage has a negative impact on asset quality: bad loans rise with corporate leverage, and higher leverage leads to more failures. In other words, leverage has a clear impact on banks' asset quality.

S. Vardhan, R. Sharma, and V. Mukherji investigated the role of bank-specific determinants of non-performing assets in the Indian banking system from 1995 to 2010, using CRAR and credit growth as alternative dependent variables and bank-specific variables as independent variables. They found a significant effect of CRAR on non-performing assets in the range of 10 to 12%, beyond which a higher CRAR leads to a decrease in non-performing assets; this finding also resembles observations made in earlier work. The study also suggests that employees' skills should be enhanced so that better decisions can be taken when extending financial assistance to clients. It was further observed that NPAs are increasing at a faster pace in PSBs than in private banks. The study shows that lending principles were not properly adhered to by either PSBs or private banks, which increased bad loans. It suggests establishing accountability at the top management level, including the boards of directors of institutions, at least for high-value loans, to ensure proper adherence to the principles of lending. There is also scope to improve corporate governance for better credit and operational decisions.

S. Sharma, D. D. Rathore, and J. Prasad attempted to compare selected public and private sector banks with regard to the increase in NPAs. Their research is based on fresh data gathered by the questionnaire method from bank staff.

Theoretical Framework

An asset or loan becomes an NPA when it stops earning revenue for the bank. Kinds of NPA/stressed assets:

i. Standard assets: assets that are not classified as NPAs.
ii. Sub-standard assets: when a loan remains an NPA for less than 12 months; the bank has to maintain reserves of 15% against these assets.
iii. Doubtful debts: when an asset remains an NPA for more than 1 year.
iv. Loss assets: where the asset has been identified as a loss by the bank's auditor.

Reasons for NPAs

Willful defaults: when a borrower does not pay its debts to the lender even though the borrower is able to meet its obligations.
Misuse of funds: sometimes a borrower takes a loan for one purpose and actually uses it for another, from which sufficient funds to repay the loan cannot be earned; for example, funds taken for the short run may be deployed for the long run, so that short-term obligations cannot be met.

Industrial crisis: an external factor that affects banks' NPAs in the country. Sometimes an industry faces performance problems, as real estate has in the last 5 years, and the banks' real estate exposures went into default.

Lenient lending norms: leniency by banks is also a major reason for the increase in NPAs, as are aggressive funding for growth and cut-throat competition in the banking industry.

Kinds of Loans

Secured loans: loans funded against some asset of the borrower, called collateral; that is, the loan is backed or secured by an asset, so if the borrower fails to pay or the loan becomes an NPA, the bank can sell the asset and recover the loan.

Unsecured loans: loans not backed by any collateral, so recovery depends on the borrower's capacity and willingness to repay.

Conclusion

We found that there are only 15 relevant articles about NPAs out of a total of 362; most of the studies found concern other countries, so there is scope for the study of NPAs in India. Some studies compared the non-performing asset scenarios of the priority and non-priority sectors in India and found that the growth rate of NPAs in the priority sector is lower than in the non-priority sector, although growth in priority-sector loans is higher than in the non-priority sector. Studies have also compared the NPA status of the public and private sectors in India, showing that the growth rate of NPAs in public sector banks is higher than in private sector banks. A study of the correlation between corporate leverage and the chance of default found a positive relation: high leverage leads to more defaults and a greater chance of loans turning into NPAs. Studies also revealed that lending parameters were not properly followed while appraising loans, which is another main reason for defaults. Most of the studies compare public sector banks, private sector banks, co-operative banks, and other kinds of financial institutions, and some also examine the NPA scenario by kind of loan (secured/unsecured), but few or no researchers have studied under which eligibility program NPAs grow fast and under which they grow slowly; there is therefore scope to research how an institution's portfolio behaves across kinds of loans and across eligibility programs. Most studies cover a period of 5 years and run up to 2015, so there is scope for a comparative study of the lending business of banks and NBFCs in India with respect to NPAs over the last decade, from 2009 to 2019, covering one term of UPA2 (2009 to 2014) under PM Dr. Manmohan Singh and one term of NDA1 (2014 to 2019) under PM Mr. Narendra Modi. The last decade is the 10-year period after the global recession of 2008, and there is also scope to compare the NPA scenario in India with the previous decade, 1998 to 2008, which immediately preceded the 2008 global recession.
Here, then, a comparison can be made of the latest two decades, covering the pre-recession and post-recession periods and also the terms of the two central governments, UPA and NDA. We also observed that the largest share of studies compares PSBs and private banks in terms of NPAs, covering 27% of the total, and an equal share examines macroeconomic and microeconomic determinants/factors of NPAs, also covering 27% of the papers published on NPAs. The third-largest topic is the comparison of NPAs in the priority and non-priority sectors, covering 13% of the studies. The remaining five topics contribute equally, each covering 7% of the studies. The topics on which the fewest studies have been done are: comparison between banks' aggregate portfolios and NPAs, corporate governance and NPAs, leverage of the borrower or the bank and NPAs, loan restructuring and NPAs, and credit and risk management and NPAs in banks and financial institutions. Hence there is scope for further study in these least-examined areas, such as banks' credit and risk management policies as determinants of NPAs, borrower leverage as a determinant of NPAs, and corporate governance in banks in relation to NPAs. Some untouched areas were also observed: trends and major determinants of NPAs in NBFCs in India, trends of NPAs in regional rural banks, trends and determinants of NPAs in co-operative banks in India, and a comparative study of NPAs in India across PSBs, private sector banks, NBFCs, RRBs, and co-operative banks for the latest decade, together with their major determinants.

Limitations

The study has the following limitations:
1. We used two databases for the review: (1) Web of Science and (2) Taylor and Francis.
2. The total number of articles found and reviewed in these databases was only 362.
3. We reviewed NPAs in India only and excluded the scenario in the rest of the world.
4,523.4
2021-04-10T00:00:00.000
[ "Business", "Economics" ]
Directionality in protein fold prediction

Background

Ever since the ground-breaking work of Anfinsen et al. in which a denatured protein was found to refold to its native state, it has been frequently stated by the protein fold prediction community that all the information required for protein folding lies in the amino acid sequence. Recent in vitro experiments and in silico computational studies, however, have shown that cotranslation may affect the folding pathway of some proteins, especially those of ancient folds. In this paper aspects of cotranslational folding have been incorporated into a protein structure prediction algorithm by adapting the Rosetta program to fold proteins as the nascent chain elongates. This makes it possible to conduct a pairwise comparison of folding accuracy, by comparing folds created sequentially from each end of the protein.

Results

A single main result emerged: in 94% of proteins analyzed, following the sense of translation, from N-terminus to C-terminus, produced better predictions than following the reverse sense of translation, from C-terminus to N-terminus. Two secondary results emerged. First, this superiority of N-terminus to C-terminus folding was more marked for proteins showing stronger evidence of cotranslation; and second, an algorithm following the sense of translation produced predictions comparable to, and occasionally better than, Rosetta.

Conclusions

There is a directionality effect in protein fold prediction. At present, prediction methods appear to be too noisy to take advantage of this effect; as techniques refine, it may be possible to draw benefit from a sequential approach to protein fold prediction.

Background

The purpose of this paper is to investigate whether directionality of synthesis can have an impact on the accuracy of protein structure prediction. In order to do this a sequential structure prediction algorithm, based on the most successful free modelling method of our time, Rosetta, was developed and used to predict structure, first starting from the nitrogen terminus and then starting from the carbon terminus. Free modelling protein structure prediction methodology has improved in recent years, but is still not accurate enough to be considered satisfactory (see results of CASP6 [1] and CASP7 [2,3] and the more recent CASP8 [4]). Given the noisy nature of current free modelling structure prediction techniques, the pairwise comparison design used here appears to be required; it succeeded in detecting a consistent directionality effect. We begin, however, by summarizing the area.

Almost fifty years ago Anfinsen et al. [5,6] showed that denatured small globular proteins could refold to their native state. On the other hand, experimentalists have known for many years that cotranslation can play an important role in protein folding [7][8][9][10][11][12]. Polypeptides are synthesized sequentially, and translation can occur at variable rates according to codon speed [13][14][15][16][17]. In Escherichia coli, for example, translation can occur on the order of 0.05 s/codon [13,[18][19][20]. On the other hand, it has been shown that helices and sheets fold on the low millisecond scale [21][22][23]. Therefore, some proteins fold faster than they elongate, and it is reasonable to assume that nascent chains can adopt secondary or tertiary structures cotranslationally.
Experimental evidence for cotranslational folding dates back to the 1960s, with a study of cotranslation in vivo reporting that ribosome-bound β-galactosidase showed enzymic activity [24]. More recently it has been shown that the Semliki Forest Virus Protein (SFVP), which contains a protease domain that folds to autocatalytically cleave the protein from a larger polyprotein precursor, gains its enzymic activity before complete synthesis of the polyprotein [25]. Moreover, the rapid cotranslational folding of SFVP does not require additional cellular components [26]. In addition to enzymatic activity whilst still bound to the ribosome, intermediate stages of cotranslational folding may have native-like structures. Specific heme-binding activity has been demonstrated for several truncated ribosome-bound nascent chains of α-globin of various lengths; the shortest of these contained only the first 86 residues (from a total of 147 residues), demonstrating that the nascent chain has native-like structure [27]. NMR studies of nascent chains containing tandem Ig domains and still attached to the ribosome revealed that the N-terminal domain folds to its native state while the C-terminal domain is largely unfolded and flexible [28]. Recent molecular dynamics simulations also conclude that small peptides may adopt a conformation that is similar to the one adopted in full proteins [29]. The discovery of the formation of disulphide bonds in nascent immunoglobulin peptides also confirms the ability of proteins to begin to fold whilst they are being synthesized [30,31].

As well as adopting native-like conformations while still attached to the ribosome, there is evidence that peptides can begin to fold whilst still in the ribosomal exit tunnel. Analysis of the ribosomal exit tunnel reveals that peptides can traverse the tunnel in an α-helical conformation [32], but that at no point is the tunnel big enough to accommodate structures larger than α-helices [33,34]. Peptides are not restricted to an α-helix, however, and may adopt more extended conformations [35]. Analysis of the exit tunnel has also shown that the tunnel can entropically stabilize α-helical conformations as they pass through [36].

The rate of in vitro refolding has often been observed to be slower than the corresponding rate in vivo [37,38]. Cotranslation has been studied in the bacterial luciferase αβ heterodimer, and the formation of the heterodimer is faster when the β monomer is translated in the presence of the folded α monomer than when the β monomer is refolded from a denatured state [38]. This shows that, under cotranslational folding, the β monomer is able to obtain a conformation that is more receptive to the formation of the dimer, thus avoiding kinetic traps associated with refolding from a denatured state [39]. Native-like structure has also been observed in cotranslationally folding monomeric firefly luciferase; again, cotranslational and in vitro folding pathways appear to be different, with cotranslational folding being faster [40]. Cotranslational folding in P22 tailspike protein has been shown to guide the peptide away from aggregation-prone conformations that are frequently encountered when refolding in vitro, leading to the hypothesis that cotranslational folding could be an efficient strategy for the folding of β-sheet topologies, and for large, multidomain proteins in general [41]. One possible explanation for this is that the peptide begins to fold while still attached to the ribosome [42,43].
Another possible explanation is the existence of additional folding machinery contained in the cell; however, only approximately 20% of proteins associate, for example, with chaperones [44,45]. The removal of major chaperones, such as DnaK and Hsp70, in E. coli has no adverse effect on cell growth or viability [46,47]. This suggests that chaperones alone cannot account for the higher folding rates observed in vivo.

Complementing these experimental findings, computational models of cotranslational folding have also been explored, an early, incidental use of this idea appearing in [48]. Simple computational models of protein folding incorporating cotranslation demonstrate that such folding favours local contacts in intermediate and final folds [49,50]. More recently the effect of energy barriers on simple cotranslational models was studied, and it was found that the ground state of proteins folded sequentially was not necessarily the one of lowest energy [51]. Computational models have provided evidence that nascent chains may adopt partial structures similar to the corresponding parts of the complete protein [52]. Other lattice studies present a differing view of cotranslation in which nascent peptides can remain largely unstructured until the final stages of synthesis (estimated to be when 90% or more of the protein has been extruded) [53]. This finding is dependent on the involvement of the C-terminal in tertiary interactions, and may not be applicable to all proteins. There is also evidence arising from lattice models that cotranslational folding pathways and refolding pathways are different [53]. Computational simulations of real proteins folding cotranslationally, compared to refolding from a denatured state, show mixed results. Chymotrypsin inhibitor 2 (CI2) and barnase were shown to fold mostly posttranslationally, with intermediates similar to those observed in refolding [54]. An alternative computational, cotranslational approach using dynamic optimisation in [55] found that major elements of the CI2 tertiary structure only form when the amino acid string is fully translated. For SFVP, which is known to fold cotranslationally [25], different pathways were taken during synthesis to those taken when folding from a denatured state [54]. A further promising approach is found in [56]: pathways which minimize the difficulty of folding to the native state (for example, those which avoid having the chain pass through an opening) are found, and results indicate that earlier folding is more likely around the N-terminus than the C-terminus, pointing to an asymmetry of the folding process that is confirmed in the current work.

Finally, there is also evidence of cotranslational protein folding that arises from numerical summaries of known protein structures. An analysis of structures in the Protein Data Bank (PDB) found that residues are, in general, closer to previously synthesized residues than to those synthesized later, and that the N-terminal region was more compact than the C-terminal region [57]. It was argued that this provided evidence of cotranslational folding; however, these findings were contradicted by a later analysis of a larger set of proteins [58]. In the second study it was observed that the C-terminals were more compact and contained greater numbers of local contacts than N-terminals.
Further analysis that considered topological accessibility (the ability of a protein to fold from a given residue as a starting point using only local contacts) found this to be more evident towards the N-terminus in the α/β class of proteins [59]. In a similar vein, Deane et al. [60] developed a measure of previous contacts which assesses the extent to which the chain forms contacts with previously extruded residues. They also found that the α/β class and ancient folds [61] exhibited such evidence of cotranslation.

To date, protein structure prediction methods do not incorporate cotranslational effects. This paper describes such an algorithm and evaluates its performance. This evaluation reveals that, in more than 94% of cases, a sequential algorithm that follows the sense of translation, that is, from N-terminus to C-terminus, is more accurate than an algorithm that follows the reverse sense, from C-terminus to N-terminus. The success of the sequential algorithm is greater the more the target shows evidence of cotranslational folding. It is also found that a sequential algorithm can match, and on occasion better (in 51% of proteins tested), the performance of a leading non-sequential protein structure prediction algorithm, namely Rosetta.

Structure prediction algorithms

A sequential algorithm (SAINT, a Sequential Algorithm Initiated at the Nitrogen Terminus) was developed and used to predict the structure of a number of proteins. This algorithm uses the Rosetta program [62] (version 2.1.0), extending it to incorporate cotranslational aspects of protein folding. To investigate the importance of following the direction of translation, the sequential algorithm was adapted to predict the structure of proteins produced in the reverse direction, from the C-terminus to the N-terminus. Predictions from the sequential and reverse sequential algorithms were compared, and they in turn were compared to predictions made using an unmodified version of Rosetta. These algorithms are now described.

Sequential algorithm

SAINT extends the peptide by a nine-residue fragment at each iteration, starting with the N-terminus. Each fragment is added in a fully extended conformation (ϕ = -150°, ψ = 150° and ω = 180°). The final fragment may contain fewer than nine residues; it will contain as many residues as are required to complete the full protein chain. At each extension the peptide is allowed to fold and the conformation reached is used as the starting structure for the next extension, with Rosetta ab initio used to perform the structure predictions at each stage. In order to make comparisons between the sequential and non-sequential algorithms fair, each uses the same total number of cycles. For the sequential algorithm these cycles were distributed evenly amongst the extensions of the peptide, with the number of cycles calculated as follows. If b is a base number of cycles and l is the protein length, then the total number of cycles is t = b(l/100) and the number of extrusions is e = ⌈l/9⌉. This results in n = ⌊t/e⌋ cycles for each of the first e − 1 extrusions and t − n(e − 1) cycles for the final extrusion.
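A short sketch of this bookkeeping follows (the function and variable names are ours, not from the SAINT source; the base of 34,000 cycles per 100 residues is the Rosetta figure quoted below):

```python
import math

def saint_cycle_schedule(length, base=34000, frag=9):
    """Distribute the total fragment-insertion cycles evenly over the
    extrusions of a sequential (SAINT-style) run, following the scheme
    described above; returns one cycle count per extrusion."""
    t = int(base * length / 100)       # total cycles, t = b * (l / 100)
    e = math.ceil(length / frag)       # number of extrusions, e = ceil(l / 9)
    n = t // e                         # cycles per extrusion, n = floor(t / e)
    return [n] * (e - 1) + [t - n * (e - 1)]   # remainder goes to the last

schedule = saint_cycle_schedule(101)   # e.g. 1qc7A has 101 residues
print(len(schedule), sum(schedule), schedule[0], schedule[-1])
# -> 12 extrusions, 34340 cycles in total, 2861 each, 2869 for the last
```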
Reverse sequential algorithm

The reverse sequential algorithm is the same as the sequential algorithm; it differs only in that the peptide is extended from the C-terminus to the N-terminus.

Non-sequential algorithm

In non-sequential folding a protein is folded from a fully extended state. The Rosetta ab initio algorithm is employed for this process, using insertion from a library of fragments to build decoys (predicted structures). This has proved a successful technique for protein structure prediction in recent years [3,[63][64][65]]. Rosetta can select fragments from the target, so the algorithm as used here is not strictly ab initio. The number of cycles (fragment insertions) used by Rosetta varies with protein length in this study: a base number of 34,000 cycles was used for a protein of 100 residues, and this number increased proportionately; for example, for a protein with 143 residues the number of cycles is increased by a factor of 1.43. This is reasonable, as in the cell longer proteins take more time to be synthesized and thus have more time to explore conformational space before synthesis is completed.

Selection of targets

In Deane et al. [60] a measure was developed, the Average Logarithmic Ratio (ALR), which assesses the extent of previous contacts within a peptide chain; proteins with positive ALR are expected to be those for which the cotranslational aspect of folding has a substantial impact, whilst proteins with negative ALR are expected to be those for which cotranslation has lesser impact. Two sets of targets were created from a PISCES [66] data set (<30% sequence identity, resolution better than 3 Å, at least 100 residues and no missing residues, downloaded 6 February, 2009). The first set contained protein chains with an ALR value of 0.15 or greater (a total of 34 proteins), and the second contained chains with an ALR of -0.15 or less (a total of 34 proteins); these two sets are referred to as the positive and negative sets respectively. For each protein in the two sets, 1000 decoys were generated with each of the algorithms described above (sequential, reverse sequential and non-sequential), and GDT_TS values [67] were calculated for each of the resulting predictions. GDT_TS measures the closeness of corresponding residues in known and predicted structures; in cumulative form it is GDT_TS = (100/4) Σ_{i∈{1,2,4,8}} N_i/N, where N_i is the number of corresponding residues within i Å and N is the total number of residues. It is helpful to see it in non-cumulative form as GDT_TS = (100/N)(M_1 + (3/4)M_2 + (1/2)M_4 + (1/4)M_8), where M_i is the number of corresponding residues whose distance first falls within the i Å threshold; this form makes explicit that closer pairs are weighted more heavily.
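A compact sketch of the score as written above (our illustration; it assumes per-residue distances from an already-computed superposition, whereas dedicated tools such as LGA search over superpositions for each distance threshold):

```python
def gdt_ts(distances):
    """GDT_TS (%) from per-residue C-alpha distances (in angstroms) between
    corresponding residues of a prediction and the experimental structure.
    Simplification: a single fixed superposition is assumed; the full score
    optimizes the superposition separately for each threshold."""
    n = len(distances)
    fractions = [sum(d <= t for d in distances) / n for t in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * sum(fractions) / 4

# toy example: fractions within 1/2/4/8 A are 0.2/0.4/0.6/0.8 -> GDT_TS = 50.0
print(gdt_ts([0.5, 0.8, 1.5, 1.9, 3.0, 3.5, 5.0, 7.0, 9.0, 12.0]))
```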
Larger sample size

To establish whether the sample size (that is, the number of decoys produced for each protein) has an effect on the results, two proteins were subjected to larger sampling: an additional 100,000 decoys were generated for the FLiG C-terminal domain of Thermotoga maritima (1qc7A) and also for 1ji4A, using the SAINT algorithm.

Variability in peptide termini

As the differences between mean GDT_TS scores for SAINT and reverse SAINT, for a given protein, prove to be generally small, additional tests were conducted to ascertain whether terminus loop regions could be causing the observed effects. The termini of proteins are often unstructured, and their structure can be highly variable and difficult to predict; small mistakes in the terminus regions could lead to the small differences observed between the mean GDT_TS scores. The first N-terminal and last C-terminal secondary structure elements were identified in the experimental structure for each protein, and the termini up to the identified secondary structure elements were removed from the corresponding predicted model with the highest GDT_TS. A secondary structure element was defined as a run of four residues with identical secondary structure assignment, with secondary structure assigned from the experimentally determined structure by DSSP. In addition to these conditions, the N-terminal and C-terminal secondary structure elements had to be separated by at least five residues. GDT_TS scores were recalculated and counts taken of how often SAINT outperformed reverse SAINT and how often SAINT outperformed Rosetta.

Clash analysis

A possible reason for the better performance of SAINT was conjectured to be that extrusion from the nitrogen terminus produces fewer steric clashes than extrusion from the carbon terminus. To investigate this, ten protein sequences were selected on the basis of their mean GDT_TS scores: four in which SAINT performed better, three in which reverse SAINT performed better, and three in which SAINT and reverse SAINT performed comparably. For each protein, two of the 1000 models generated were selected for each of SAINT and reverse SAINT. The extent of steric clashes in conformations following folding, for five extruded lengths (18, 36, 54, 72, 90), was assessed using MolProbity [68], a web server that calculates a "clashscore", equal to the number of steric overlaps greater than 0.4 Å per 1000 atoms. Nine residues in fully extended conformation were then added at the C-terminus (for SAINT) or the N-terminus (for reverse SAINT) to produce strings of length 27, 45, 63, 81, and 99, and these were checked again for steric clashes. For each of the five positions, the clashscore before the addition of nine residues was subtracted from the clashscore after the addition of the 9-mer fragment. An average of the differences in clashscores, across all five lengths, was taken for each protein sequence and each algorithm.

The importance of sense

To investigate why SAINT might perform consistently better than reverse SAINT, measures of secondary structure prediction quality were developed. For a given decoy, structural alignments of every overlapping fragment of 11 residues against the experimental structure were obtained, and the average Cα-Cα distance of the alignment was assigned to the fragment's center residue (fragments of 11 residues were chosen to provide insight into prediction accuracy on a more local scale than, for example, taking an entire secondary structure element). These residue-assigned distance measures were averaged across all residues in α-helices in the decoy (residue secondary structure was assigned by DSSP from the experimentally determined model), and these in turn were averaged over all 1000 decoys. This was done for both the forward and reverse decoy sets. Finally, the forward helical prediction quality measure was subtracted from the reverse helical prediction quality measure, and the same process was followed for β-strands. If directionality were not important in folding, we would expect the accuracy of helical or strand predictions to be similar regardless of the direction of synthesis, making the difference calculated above zero; a positive difference indicates that forward predictions were more accurate than reverse predictions, while a negative difference indicates that reverse predictions were more accurate. One of the proteins in the positive set (1qc7A) and four in the negative set (1kf6D, 1mkaA, 1nekC and 1uz3A) contained no β-strand residues and, therefore, were not considered in the analysis.
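The sketch below illustrates this measure (our own reconstruction; the exact alignment protocol used in the study is not specified beyond the 11-residue window, so a standard Kabsch superposition of each fragment onto its native counterpart is assumed):

```python
import numpy as np

def fragment_deviation(P, Q):
    """Mean per-residue deviation (A) after Kabsch superposition of
    fragment P onto fragment Q; P and Q are (n, 3) C-alpha coordinates."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))           # avoid improper rotations
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.linalg.norm(P @ R - Q, axis=1).mean())

def residue_quality(decoy, native, window=11):
    """Assign to each center residue the deviation of its 11-residue
    fragment aligned against the experimental structure."""
    h = window // 2
    return {c: fragment_deviation(decoy[c - h:c + h + 1], native[c - h:c + h + 1])
            for c in range(h, len(decoy) - h)}

def ss_quality(decoys, native, ss_residues):
    """Average the residue-assigned deviations over one secondary-structure
    class (e.g. DSSP helix residues) and over all decoys in the set."""
    means = []
    for decoy in decoys:
        q = residue_quality(decoy, native)
        means.append(np.mean([q[r] for r in ss_residues if r in q]))
    return float(np.mean(means))

# difference used in the text: reverse-set quality minus forward-set quality;
# a positive value means the forward (SAINT) predictions fit more closely.
```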
Results and Discussion

The emerging partial conformations produced by SAINT for sequence 1qc7A are shown in Figure 1, using the most successful decoy. The six helices are seen to progressively take shape as the chain is extruded, with early conformations largely preserved.

Figure 1. Cotranslational structure prediction of the FLiG C-terminal domain (1qc7A; 101 residues). Segments of nine residues are extruded at a time, except for the last segment, which consists of two residues. One thousand decoys were produced; the particular simulation above produced the structure with the highest GDT_TS of 63.12%. In each sub-figure the N-terminal is coloured dark blue and appears at the center adopting approximately the same orientation; it cannot always be the same orientation due to changes in conformation as the protein folds.

Results for SAINT, reverse SAINT and Rosetta for each of the proteins in the positive set (ALR ≥ 0.15, see Methods, Selection of targets) and negative set (ALR ≤ -0.15) are summarized in Table 1 and Table 2 respectively; each table gives the mean GDT_TS and maximum GDT_TS over all 1000 decoys for each combination of protein and algorithm, with the highest GDT_TS shown in bold and the lowest in italics. The mean performance and best models produced by SAINT show that it predicts structures better than reverse SAINT in the majority of cases (Table 3). For example, SAINT yielded a higher mean GDT_TS than reverse SAINT for 32 of the 34 proteins with positive ALR and, equally, for 32 of the 34 proteins with negative ALR. Plots of the mean scores for SAINT, reverse SAINT and Rosetta for the positive set are given in Figure 2A, with proteins ordered from smallest to largest mean SAINT GDT_TS score. Corresponding plots for the negative set are given in Figure 3A. The consistent superiority of SAINT over reverse SAINT is evident, with the difference being slightly greater for the positive set. The largest such difference seen in all the data is 8.49%, observed between the means of SAINT and reverse SAINT for 3ezmA (negative set), and representing an increase in GDT_TS from 20.25% to 28.74%. Mean performances of SAINT and Rosetta indicate that Rosetta outperforms SAINT in both the positive (Rosetta 19.72, SAINT 19.50) and negative (Rosetta 18.26, SAINT 17.84) sets; the difference is greater for the negative set (Table 3). Plots of the maximum scores for SAINT, reverse SAINT and Rosetta for the positive set are given in Figure 2B, with proteins ordered from smallest to largest maximum SAINT GDT_TS score. Corresponding plots for the negative set are shown in Figure 3B. When considering best performance, SAINT is again superior to reverse SAINT, and more so in the positive set. Rosetta is no longer superior when best performance is considered; SAINT outperforms Rosetta, for example, in 19 of the 34 proteins in the positive set. The most successful SAINT prediction in the positive set was found for 3vubA. It is shown superposed on the native conformation in Figure 4, together with superpositions of the best reverse SAINT and Rosetta predictions on the native conformation; SAINT captures the structure better than either reverse SAINT or Rosetta. A GDT_TS value of 30% or above is generally considered to ensure that a reasonable prediction is found [4]; a scan of Table 1 indicates that roughly one half (15 out of 34) of the best SAINT predictions are satisfactory, and similarly for Rosetta (16 out of 34).
Larger sample size Summaries of the distribution of GDT_TS scores indicate that the size of the decoy sets used (that is, 1000) does not significantly influence their values (for 1qc7A, a sample size of 1000 has min. 23 ...). Variability in peptide termini The results of this test indicate that the differences in GDT_TS observed are not due to variability in the terminus regions of the peptides (data presented in Tables 4 and 5). Clash analysis The results are shown in Table 6. Four of the ten protein conformations examined have higher steric clashscores for SAINT than for reverse SAINT. The steric clashscore appears unrelated to the mean GDT_TS score, as evidenced by two (1mf7A and 2d00A) of the four proteins with higher mean GDT_TS scores for SAINT having greater steric clashscores than reverse SAINT. Steric clashes produced by SAINT and reverse SAINT are generally comparable, providing no evidence that fewer steric clashes are the reason for the better performance of SAINT. The importance of sense The differences obtained from both the positive and negative sets are shown in Figure 5. These results show that for both types of secondary structure SAINT generally produces better predictions, but that the difference is more pronounced for strand residues. In 28 of the 33 proteins (85%) in the positive set, the difference between forward and reverse folding is greater for strands than for helices (with 16 (48%) having a β-strand difference more than twice the α-helix difference). Similarly, in 26 of the 30 proteins (87%) in the negative set, the difference between forward and reverse folding is greater for strands than for helices (with 19 (63%) having a β-strand difference more than twice the α-helix difference). These results indicate that, in general, SAINT is more accurate when predicting strands than is reverse SAINT. The differences are small, but they would account for the differences observed in the results. Discussion A consistent difference in prediction accuracy was seen between SAINT and reverse SAINT. SAINT is markedly superior to reverse SAINT, and slightly more so for proteins with positive ALR values. When looking in detail at SAINT and reverse SAINT, the differences observed are most likely due to the detrimental effect on strand prediction observed when elongating a peptide from the C-terminus to the N-terminus. SAINT produced decoys with a higher mean GDT_TS than reverse SAINT for more than 94% of proteins in both the positive and negative protein sets. The differences between mean GDT_TS scores for SAINT and reverse SAINT decoys were also bigger than those between SAINT and Rosetta decoys. If directionality played no part in the folding process, it would be expected that there would be no difference in the predictive accuracy of extrusion from the N-terminus to the C-terminus and extrusion from the C-terminus to the N-terminus. Three possible explanations for these results are outlined below. Peptides, when extruded from the ribosome, start at the N-terminus. For this reason, fragments near the N-terminus are less influenced in their folding by the remainder of the peptide, since this has yet to emerge from the ribosome. On the other hand, fragments towards the C-terminus must fold in the presence of the bulk of the peptide. Figure 3 Plots of mean and maximum GDT_TS for the negative set.
Graphic A shows the mean GDT_TS scores for the 34 proteins in the negative set, for SAINT (red squares), reverse SAINT (blue circles) and Rosetta (green triangles), with the proteins ordered according to ascending mean SAINT GDT_TS. Graphic B plots maximum GDT_TS for proteins in the negative set, ordered by ascending maximum SAINT GDT_TS. Outcomes are the same as for the positive set, with all differences less marked. Thus the conformation assumed by the early fragment is a local choice, in that it depends largely on the amino acid sequence of the fragment. The conformation reached by a later fragment is determined by more than its amino acid sequence, in that it also depends on surrounding structure. This behaviour is mimicked by SAINT but not by reverse SAINT, providing an explanation for the consistently better predictive accuracy of SAINT. A second explanation arises from the way that the two algorithms allocate fragment insertions. At any stage, due to the constraints of Rosetta, fragment insertions are made uniformly across the currently extruded peptide length. The upshot is that more fragment insertions are attempted at the N-terminus than at the C-terminus for SAINT, while the opposite is true for reverse SAINT. Should it be the case that the N-terminus of the peptide is harder to predict than the C-terminus, SAINT would be more successful than reverse SAINT, since SAINT puts in effort where it is needed. For the reasons stated above, however, we expect the N-terminus to be more easily predicted than the C-terminus. A third possibility is that Rosetta itself has some inherent directionality, favouring SAINT over reverse SAINT. A study of Rosetta, however, provides no indication of such a directional bias. A strong correlation between mean GDT_TS and chain length is seen for both the positive and negative sets and for all three algorithms: as the chain length increases, the GDT_TS decreases. 1oaaA is the only target over 200 residues in length that produced a set of decoys with mean GDT_TS greater than 20%, indicating that the versions of the algorithms employed in this study are not sufficient to accurately predict the structure of chains with more than 200 residues (such chains account for 50% of the positive set and 24% of the negative set). Excluding these data from the analysis, however, makes no difference to the overall findings. Figure 4 Superpositions of the best predictions for 3vubA on the native structure. The best decoy produced overall was by SAINT for 3vubA, whose native conformation is shown in a). The remaining graphics show the superposition of this native conformation with the best decoy produced by b) SAINT (GDT_TS = 67.57), c) reverse SAINT (GDT_TS = 37.62) and d) Rosetta (GDT_TS = 51.24). The SAINT decoy best captures the native loop and sheet conformation; a loop error causes the C-terminal helix to be incorrectly oriented. Among the 1000 decoys produced for each protein with ALR ≥ 0.15 by each of SAINT, reverse SAINT, and Rosetta, the best model (with highest GDT_TS) was found (as indicated in Table 1 by Maximum GDT_TS). Each of these selected models was then altered by chopping off the first (N-terminal) and last (C-terminal) secondary structure elements identified in its native structure.
GDT_TS scores were then recalculated for each algorithm and are displayed below. The highest GDT_TS is shown in bold while the lowest is shown in italics. Sample size was reduced to 33, as no secondary structure element at least five residues in length was found at either terminus of the protein chain 2j01V. Among the 1000 decoys produced for each protein with ALR ≤ -0.15 by each of SAINT, reverse SAINT, and Rosetta, the best model (with highest GDT_TS) was found (as indicated in Table 2 by Maximum GDT_TS). Each of these selected models was then altered by chopping off the first (N-terminal) and last (C-terminal) secondary structure elements identified in its native structure. GDT_TS scores were then recalculated for each algorithm and are displayed below. The highest GDT_TS is shown in bold while the lowest is shown in italics. Given that SAINT outperforms reverse SAINT, it might be expected that SAINT would also outperform Rosetta, Rosetta being, in some senses, midway between the two. In best performance, arguably more important than mean performance, there is weak evidence that SAINT does outperform Rosetta; for the positive set SAINT outperforms Rosetta in 19 out of 33 instances (there is one tie), and for the negative set SAINT outperforms Rosetta in 16 out of 30 instances (there are four ties). One explanation for why this evidence remains weak at this stage is that SAINT remains crude, barely exploiting the spatial and temporal advantages which may be available in cotranslational folding; we have simply used an iterative version of Rosetta. For example, at each extrusion, fragment insertions are chosen uniformly along the extruded peptide, whereas use of an insertion-location distribution skewed towards the C-terminus might be more realistic. To its credit, however, the SAINT versus reverse SAINT investigation exploits the power of a "paired comparison" design more effectively than does the SAINT versus Rosetta investigation, in that it contrasts opposites and so is more likely to reveal an effect. Conclusions This study has presented an algorithm that builds cotranslation into protein structure prediction. To assess the importance of the direction of translation, the sequential algorithm was compared to a reverse sequential algorithm in which the protein was produced from the C-terminus to the N-terminus. Two sets of proteins were chosen: one in which the residues have, on average, more contacts with previous residues than with successive residues, and the other in which the residues have, on average, more contacts with successive residues than with previous residues. The performance of the sequential algorithm for protein structure prediction was also compared with Rosetta, which folds from a fully elongated chain. When SAINT was compared to reverse SAINT, a very pronounced difference was observed. When mean GDT_TS was used as the performance measure, SAINT outperformed reverse SAINT for over 94% of targets from both the positive and negative sets. These figures were still high when the maximum GDT_TS was used as the performance measure, with SAINT outperforming reverse SAINT in over 91% of targets from the positive set and over 73% of targets from the negative set. The results show that Rosetta produces decoy sets with higher mean GDT_TS scores than SAINT for both the positive and negative protein sets, but this superiority of Rosetta is not seen when the models with the highest GDT_TS scores are compared.
If it were possible always to select the most accurate structure from the set of decoys, then SAINT would, overall, produce a better prediction than Rosetta. The selection of the best decoy from a set, however, is a separate problem that is not addressed in this study. While Rosetta produces decoy sets with higher mean GDT_TS scores than SAINT, examination of the differences between the means shows that the difference is always small. Only on one occasion does a Rosetta decoy set have a mean GDT_TS greater than 2% above the corresponding SAINT decoy set (an increase in mean GDT_TS from SAINT to Rosetta of 2.4% for 1ji4A). Table 6 Mean difference in clashscores for each protein sequence; the larger the mean difference, the more clashes created by the extrusion. The first four proteins in the table have higher mean GDT_TS scores for SAINT, the next three have higher mean GDT_TS scores for reverse SAINT, and the remaining three have comparable mean GDT_TS scores for SAINT and reverse SAINT. There is no evidence that SAINT creates more clashes. It has been established that the size of the decoy set and the flexibility of peptide terminus residues do not affect the distribution of GDT_TS scores. The sequential algorithm described in this study is in its earliest stages of development. Future work will include investigating the effect of translation speed, allowing extruded segments to have variable length, and allowing the number of fragment insertion attempts at each iteration to vary. Improvements should also include the incorporation of spatial restrictions to simulate the constraint of the ribosome tunnel. Figure 5 Difference (reverse SAINT minus SAINT) in the secondary structure distance measure for helical (grey) and strand (black) residues. Positive values here indicate that SAINT is producing predictions that are more accurate than those of reverse SAINT. Evidently SAINT outperforms reverse SAINT for both types of secondary structure, but more strongly for strands and for the negative set.
8,029
2010-04-07T00:00:00.000
[ "Biology", "Computer Science" ]
IOT, Industry 4.0, Industrial IOT... Why Connected Devices are the Future of Design This paper looks at Industrial Internet of Things (IIoT) technology, examining what it is, how it works, how it's being used, and why it's changing the way we design the next generation of products. Predominantly, companies investing in this technology are looking for ways to improve the performance of their designs, leverage big data to make better decisions, provide more functionality to the end user, and leverage services to create additional revenue streams. Introduction Referred to in many forms, the Internet of Things (or 'IoT') is part of the next generation of technology set to disrupt the traditional manufacturing business model. These new products will offer unprecedented opportunities for manufacturers to reimagine the way they design, make and use things. With IoT comes the ability for products to communicate with their user, the manufacturer, each other and a broader system. These smart products will be able to sense the environment in which they're being used and communicate with one another to optimise the way they work. And on-board data capture will enable products to become autonomous, allowing them to make real-time decisions to improve their performance or avoid costly maintenance issues. Consider for a moment the impact that these new connected products will have on world economies. Estimates suggest that the global impact of IoT will be in the order of US$6.2 trillion by 2025 [1], so it will come as no surprise that as many as 75% of companies across industries are already exploring IoT [2]. IoT offers the ability to capture enormous amounts of data, and this provides designers with the unique ability to leverage real-world information to improve the quality of the products they're making. In this increasingly connected world, the designers of the future will drive product advancement through connected products, and IoT will be at the heart of this. What is Industrial IoT? IoT, an acronym for the Internet of Things, represents the next generation of smart, connected products that are becoming, more and more, an integral part of our lives. More typically associated with consumer products such as the Fitbit activity tracker or the Nest thermostat, these connected products offer the end user a unique experience not available with the designs of the past. The Nest thermostat, for instance, is self-learning and adjusts room temperatures based on your activity history, whilst the Fitbit activity tracker syncs users' activity, enabling them to access a comprehensive dashboard where they can see statistics and gather insight to help them achieve their goals. These types of products and services are becoming so prevalent that recent studies suggest 33% of adults already use some form of IoT in their lives [3]. The Industrial Internet of Things (or 'IIoT') uses similar principles to those of IoT but applies them to the products used by companies to provide goods and services, such as industrial machinery and vehicles. Industrial IoT potentially offers a greater market opportunity than that of consumer IoT, as industrial machinery often requires considerable investment and ongoing expense. What makes Industrial IoT work?
So what is it that makes IIoT products so smart? There are a number of key systems that comprise an IIoT product. Below is a list of the more critical aspects that should be considered: Product: Typically, an IIoT product incorporates mechanical hardware, electromechanical hardware, electrical hardware, electronic hardware and software into one system. Each of these plays a critical role in the overall system, from gathering information through sensors to storing or transmitting data. Sensors: More affordable and smaller in size, sensors have never been more accessible than they are today. An electro-mechanical device that gathers information about its surroundings, a sensor can measure a wide range of quantities including pressure, force, humidity and voltage, to name a few. Connectivity: From the sensors comes the data, but it needs to be transmitted from the product to the broader system where the information can be further analyzed. Typically, this is done using communication protocols, some of the more common being Bluetooth, Wi-Fi and cellular, each offering different range, power consumption and internet connection characteristics. A gateway then normalizes data from various sources into a common format and prepares it for transmission via the internet (a small illustrative sketch of this normalization step follows this list). Cloud: The best place to gather and analyze all of this data is in the cloud. Here the data can be securely stored, processed and analyzed. Combining software, a big data engine, an application platform and a database, the cloud is able to organize and analyze data, provide product insight, and visualize results, with much of this happening in real time. External Information: An often overlooked aspect of an IIoT system is the ability to use external sources as further input. This might be weather, traffic, prices or maps, all of which can be gathered from the internet. Gathering information from CRM or PLM systems further enables valuable insight into a product's performance. Visualization of Data: Data visualization is the user interface that enables end users to control their products remotely. Often in real time, data visualization allows users to see trends, compare products and track information. Commands can also be sent through to products, essentially enabling remote access or control via the cloud. What are some of the more typical results? In general, Industrial IoT products are able to take advantage of the following key benefits: Closed Loop Design: Using data gathered from real-world use, designers are able to better understand how products are being used. This further enables them to design better-performing products that meet the needs of the end user. Increased Consumer Value: With the ability to share valuable information, offer unique features and functionality, and provide more convenience, IIoT products give the end user a better experience. Predictive Maintenance: Machinery uptime is critical for any industrial machinery manufacturer. The ability to use IIoT and gather data allows users to implement predictive maintenance and avoid machine downtime before failures occur. New Service Lines: Implementing IIoT enables manufacturers to create new revenue streams through predictive maintenance programs, remote monitoring services, and remote software updates and improvements.
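To make the gateway's role above concrete, here is a small, hypothetical Python sketch of normalizing heterogeneous sensor readings into a common record before cloud transmission. The field names, units, and device identifiers are illustrative assumptions, not drawn from any specific IIoT platform.

```python
# Illustrative gateway-side normalization: readings from heterogeneous
# sensors are wrapped in one common envelope before being sent onward.
import json
import time

def normalize_reading(source_id, quantity, value, unit):
    """Wrap a raw sensor reading in a common record format."""
    return {
        "source": source_id,       # e.g. a CAN bus node or GPS unit
        "quantity": quantity,      # "pressure", "fluid_volume", ...
        "value": value,
        "unit": unit,              # keep units explicit end to end
        "timestamp": time.time(),  # epoch seconds at the gateway
    }

payload = json.dumps([
    normalize_reading("pump-01", "pressure", 2.31, "bar"),
    normalize_reading("tank-02", "fluid_volume", 812.5, "L"),
])
# `payload` would then be transmitted over Wi-Fi or cellular to the cloud.
```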
How is it used by industry? The adoption of IIoT is advancing rapidly. With manufacturers across a broad range of industries adopting this technology, there are numerous examples that highlight the value IIoT holds for manufacturers of all shapes and sizes. The following example highlights some of the key advantages discussed in this paper. Premier Deicers: In cooler climates, de-icing of planes is a common but critical process used to ensure the performance and reliability of aircraft during takeoff. For the airline company, any excessive delays in the de-icing process can result in lost revenue and profitability. Furthermore, the fluids used are hazardous and require extensive reporting and compliance. This is exactly why a large regional airline in the US looked to IIoT to help solve some of its problems. The airline worked with its de-icing services provider, Premier Deicers, to develop a real-time fluid management system that would better enable them to understand and control various aspects of the de-icing process. Electronic equipment, in the form of J1939 CAN bus monitoring and GPS hardware, was retrofitted to new and existing de-icing machinery, enabling Premier Deicers to gather information related to engine diagnostics, fluid volume and dispensing data. Integration of Autodesk's Fusion Connect IIoT management platform meant that data taken from the machinery could be used to better understand a wide range of critical processes. Having implemented the system, Premier Deicers is better able to understand the time taken to de-ice planes and how much fluid is used during the process. Furthermore, vehicle monitoring enables them to better predict potential engine failures and implement predictive maintenance to reduce the chance of machine downtime. Overall, the adoption of IIoT has resulted in a number of key benefits to Premier Deicers and their customers. Summary and Conclusion Not since the Industrial Revolution have we seen such considerable change to the manufacturing industry as we are experiencing at the moment. Industrial IoT is changing the types of products we design, giving designers the ability to incorporate onboard electronics that can gather and share information about a product's use. Furthermore, the ability to gather information on a product's use enables designers to better understand the conditions under which it is used and, ultimately, helps them improve the performance of products. This closed-loop design process will enable designers to create products that better meet the needs of their end users while, at the same time, being more reliable. Though there will be considerable challenges during this period of moving to connected products, the opportunities to offer unique services to customers will likely create ... Figure 1: Whitegoods are just one industry to be drastically impacted by IoT. Figure 2: Remote monitoring and control of machinery via IIoT.
2,088.8
2017-02-09T00:00:00.000
[ "Engineering", "Computer Science", "Business", "Environmental Science" ]
End-to-End Differentiable Physics Temperature Estimation for Permanent Magnet Synchronous Motor: Differentiable physics is an approach that effectively combines physical models with deep learning, providing valuable information about physical systems during the training process of neural networks. This integration enhances generalization ability and ensures better consistency with physical principles. In this work, we propose a framework for estimating the temperature of a permanent magnet synchronous motor by combining neural networks with a differentiable physical thermal model, as well as utilizing simulation results. In detail, we first implement a differentiable thermal model based on a lumped-parameter thermal network within an automatic differentiation framework. Subsequently, we add a neural network to predict thermal resistances, capacitances, and losses in real time, and we utilize the thermal parameters' optimized empirical values as the initial output values of the network to improve the accuracy and robustness of the final temperature estimation. We validate the advantages of the proposed method through extensive experiments based on both synthetic data and real-world data and then discuss some further potential applications. Introduction In recent years, environmental protection and renewable energy have gained increasing attention [1], and in the automotive industry, traditional fuel vehicles have gradually been replaced by more environmentally friendly new energy vehicles. Electric motors are one of the essential components of new energy vehicles, and permanent magnet synchronous motors (PMSMs) are widely used due to their high efficiency, simple structure, and high power density. However, the temperature inside the motor can rise sharply during operation, posing risks of insulation failure and demagnetization [2] when thermal limits are exceeded. Estimating the temperature distribution inside the motor accurately and stably is therefore a key issue for practical use. The temperature estimation methods for PMSMs are mainly classified into two categories: sensor-based and sensorless methods. Sensor-based methods involve directly measuring the temperature at certain positions inside the motor using thermal sensors [3,4]. However, these methods involve additional costs and manufacturing complexities, making them unsuitable for large-scale industrial production. Moreover, repair and replacement can be time-consuming and costly when sensors fail.
Sensorless methods can be further divided into direct and indirect methods. Indirect methods include flux observers [5,6] and signal injection [7,8]. Direct methods generally predict the temperature at internal positions of the motor by directly establishing a thermal model. Among direct methods, the lumped-parameter thermal network (LPTN) [9] is the most widely used; it replaces the motor with a set of nodes. The complex thermodynamic behavior inside the motor is equivalently modeled as interactions between these nodes, based on the flow paths of heat, the law of heat conservation, and the mechanism of heat generation [10]. Parameters such as thermal losses, thermal capacitances, and thermal resistances in this thermal model can be obtained through theoretical or empirical formulas [11], finite element analysis (FEA) [12], computational fluid dynamics (CFD), or different data-driven methods [13,14]. Another common approach treats temperature estimation as a time-series prediction problem [15-17], utilizing supervised learning to fit nonlinear relationships based on data. However, purely data-driven methods commonly lack physical interpretability, diverge from physical mechanisms, and fail to utilize the actual physical information of the motor. Recently, the concept of physics-informed machine learning (PIML) or physics-based deep learning (PBDL) has gained prominence. These approaches combine prior knowledge of physics with data-driven methods, which is very helpful when training data are scarce, model generalization is limited, or physical constraints need to be satisfied. One approach adds the differential equations of dynamic systems as regularization terms in the loss function, corresponding to the physics-informed neural network (PINN) [18,19]; the backpropagated gradients then contain information provided by the differential equations. Another approach integrates the complete physical model with deep learning. In the context of the motor temperature estimation problem, several potential integration patterns are illustrated in Figure 1. Among them, the "neural network first" pattern often requires the physical model to be differentiable, namely, differentiable physics (DP) [20-22], so as to enable the backpropagation of gradients. In this work, we propose a lightweight end-to-end trainable framework for temperature estimation by integrating neural networks, differentiable physical models, and simulation results. Specifically, according to the real geometry, material properties, winding and cooling configurations, and other information of the investigated PMSM, we establish a corresponding thermal simulation model in MotorCAD, an electromechanical design software package. The simulation model provides the structure of the thermal network and simulated thermal parameters, including thermal losses, capacitances, and resistances, that can serve as reasonable initial values. Considering the time-varying characteristic of thermal parameters, a neural network for parameter correction is introduced. The network dynamically adjusts the thermal parameters based on the real-time operating conditions and temperature distribution. The corrected parameters are then fed into the corresponding differentiable LPTN, which significantly improves the accuracy of temperature estimation. To the best of our knowledge, this is the first time in the literature that the integration of differentiable physics into the domain of motor temperature estimation has been investigated.
The principal conclusions drawn from this work highlight the effectiveness of the proposed method in accurately estimating motor temperature using both synthetic and real-world data. The integration of physical principles through a differentiable physics model not only improves the accuracy and robustness of temperature estimation but also maintains consistency with physical mechanisms. The method is highly practical, offering a significant improvement over purely data-driven methods by incorporating physical model constraints and simulations, which result in more reliable and physically consistent outcomes. Related Work Most prior works based on LPTN primarily focus on how to identify the thermal parameters. Veg and Laksar [23] established a seven-node LPTN for a high-speed permanent magnet synchronous motor and calculated thermal resistances and other parameters using heat transfer coefficients; the accuracy of such theoretical-formula-based methods is limited. Choi et al. [13] utilized measured data under different operating conditions and employed the least squares method to obtain a set of optimal fixed thermal parameters, but this method cannot ensure the physical consistency of the results and ignores the time-varying characteristic of thermal parameters. Wallscheid and Böcker [24] constructed a four-node LPTN for a 60 kW HEV permanent magnet synchronous motor. Using a global particle swarm optimization algorithm and extensive measured data, they identified the unknown coefficients in empirical formulas while considering various physical constraints and prior knowledge such as heat transfer theory. This method effectively adds prior knowledge to the optimization algorithm, but the explicit empirical formulas generally make some simplifications, making it difficult to capture different or more complex nonlinear patterns. Kirchgässner et al. [25] viewed the four-node LPTN as a recurrent neural network and proposed a so-called thermal neural network. At each time step, thermal parameters without direct physical meaning were predicted by independent neural networks, and the temperature was then computed after discretizing the differential equations of the corresponding LPTN. The error between the estimated temperature and the ground truth was used to update the neural networks. However, their method predicted thermal parameters merely based on data, still in a data-driven fashion. When the neural networks are discarded, the remainder cannot work independently as a physical model, and the behavior of the neural networks is relatively uncontrollable and prone to violating physical consistency. Wang et al. [26] established a ten-node LPTN for an automotive PMSM and incorporated three independent neural networks to predict thermal parameters based on theoretical values. This is a feasible attempt at combining physical models with neural networks. However, they neglect the deviation between the theoretical and real values of thermal parameters, which limits the final accuracy and robustness, and they are unable to ensure that the estimated temperatures at all nodes in the LPTN conform to physical reality when the problem is under-constrained. Additionally, their work lacks more in-depth experiments and analyses, as well as comparisons with other algorithms to validate the method and the rationality of certain settings. Background The main idea of LPTN is to simplify the representation of various components inside the motor (such as windings, stator, rotor, etc.)
by using lumped nodes and then to represent heat flows through an equivalent circuit diagram. Each node has a thermal capacitance to characterize the heat storage capacity of the corresponding component. There typically exists a thermal resistance between every pair of nodes, reflecting the heat transfer process between internal components of the motor. Additionally, several components may generate power losses, such as copper loss, iron loss, etc. The losses are the major factor causing the change in internal temperature distribution. A schematic of the i-th node in a typical thermal network is illustrated in Figure 2. For node i, based on heat transfer theory and the heat diffusion equation [27], the following simplified ordinary differential equation can be derived [25]:

$$C_i \frac{d\vartheta_i}{dt} = P_i + \sum_{j \neq i} \frac{\vartheta_j - \vartheta_i}{R_{i,j}} \tag{1}$$

where R denotes the thermal resistance between nodes, C the thermal capacitance, P the loss, and ϑ the temperature. The number of thermal resistances generally increases quadratically with the number of nodes. For a thermal network with n nodes, the equations can be combined and written in the following matrix form:

$$\frac{d\boldsymbol{\vartheta}}{dt} = A\boldsymbol{\vartheta} + B\boldsymbol{P} \tag{2}$$

with the entries of A and B determined by the thermal resistances and capacitances. From the perspective of state space, the state variable ϑ represents the temperature at each node, A is the state transition matrix, and B is the input matrix. If the matrices A and B are time-invariant, then given the initial condition of temperature ϑ_0, the temperature ϑ_t at each time can be calculated as follows:

$$\boldsymbol{\vartheta}_t = e^{At}\boldsymbol{\vartheta}_0 + \int_0^t e^{A(t-\tau)} B \boldsymbol{P}\, d\tau \tag{3}$$

However, in practical situations, the matrices A and B vary with time, because the capacitances and resistances actually change with the operating points and the temperature distribution inside the motor. For example, as the speed increases, the thermal resistances related to ventilation may decrease accordingly. The losses vary with speed and torque during operation; thus, the total loss as well as the ratio between losses is variable. Therefore, the key to improving the accuracy of temperature estimation lies in determining A, B, and P at each step, that is, the thermal capacitances, thermal resistances, and losses. Then, several numerical methods can be used to solve Equation (2), such as forward or backward Euler, Runge-Kutta methods, etc. Implicit methods generally have better numerical stability. Taking backward Euler as an example, the equation can be discretized as follows:

$$\frac{\boldsymbol{\vartheta}^{t} - \boldsymbol{\vartheta}^{t-1}}{\Delta t} = A\boldsymbol{\vartheta}^{t} + B\boldsymbol{P}^{t} \tag{4}$$

then

$$\boldsymbol{\vartheta}^{t} = (I - \Delta t\, A)^{-1}\left(\boldsymbol{\vartheta}^{t-1} + \Delta t\, B\boldsymbol{P}^{t}\right) \tag{5}$$

We can implement this equation in an automatic differentiation framework, as it is entirely matrix-based, so the gradients will not be blocked.
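As a concrete illustration, a minimal PyTorch sketch of the backward Euler update in Equation (5) might look as follows. It assumes A, B, and P are held constant within a single step; it is a sketch of the idea, not the paper's actual implementation.

```python
import torch

def backward_euler_step(theta_prev, A, B, P, dt):
    """One implicit (backward Euler) LPTN step for d(theta)/dt = A @ theta + B @ P.

    Rearranging theta_t = theta_prev + dt * (A @ theta_t + B @ P) gives the
    linear system (I - dt * A) @ theta_t = theta_prev + dt * (B @ P).
    Every operation below is differentiable, so gradients can flow back
    to A, B, and P during training.
    """
    n = theta_prev.shape[-1]
    eye = torch.eye(n, dtype=theta_prev.dtype)
    rhs = theta_prev + dt * (B @ P)
    return torch.linalg.solve(eye - dt * A, rhs)
```

Note that `torch.linalg.solve` backpropagates through the linear solve itself, which is precisely what makes the LPTN "differentiable" end to end.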
Thermal Parameter Optimization For a thermal network with n nodes, there typically exist n thermal capacitances, up to $C_n^2 = n(n-1)/2$ thermal resistances, and fewer than n thermal losses. These thermal parameters' simulated values (SVs), exported directly from simulation software, while based on relevant physical theories and empirical formulas, often diverge from their real-world counterparts due to model simplification, the diversity of operating conditions, and environmental impacts. This discrepancy can lead to a decrease in the accuracy of the estimation model. Hence, before directly utilizing these simulated thermal parameters, it is crucial to optimize them to better align with the measured data, which is the key step in enhancing the final estimation accuracy. Therefore, we add a scaling ratio vector W_SR, corresponding to the thermal capacitances and resistances, as a learnable parameter in our framework. By element-wise multiplying the simulated values of capacitances C_sv and resistances R_sv with the corresponding entries of W_SR, we obtain the optimized values (OVs) for these thermal parameters, namely, the optimized capacitances C_ov and resistances R_ov. That is,

$$C_{ov} = W_{SR}^{C} \odot C_{sv}, \qquad R_{ov} = W_{SR}^{R} \odot R_{sv} \tag{6}$$

where the learnable W_SR is updated via gradient descent to improve the final temperature estimation accuracy during the training process. For the simulated values of losses P_sv, first, the current operating condition x_t (including speed and torque) is used to determine the total loss based on a lookup table (LUT) derived from real-world motor testing. By normalizing P_sv (i.e., element-wise division by its sum) and then multiplying by the total loss, a more accurate P_ov is obtained. That is,

$$P_{ov} = \mathrm{LUT}(x_t)\, \frac{P_{sv}}{\sum_k P_{sv,k}} \tag{7}$$

Dynamic Correction After obtaining the optimized thermal parameters P_ov, R_ov, and C_ov, and considering the time-varying characteristic of these parameters, we introduce a neural network into our framework. Taking into account the mechanisms of change and the influencing factors of these thermal parameters, the network takes as input the operating conditions x_t (such as speed, torque, coolant temperature, and ambient temperature) and the estimated temperatures of all nodes at the previous time step. It then outputs the correction vectors α_P^t, α_R^t, and α_C^t, corresponding to P_ov, R_ov, and C_ov, respectively. The learnable weight is W_NN. This step allows the optimized thermal parameters to be fine-tuned dynamically to improve the final accuracy of temperature estimation. For the i-th node in the lumped-parameter thermal network model at time t, its loss P_i^t, thermal capacitance C_i^t, and thermal resistance R_{i,j}^t between node i and node j are adjusted accordingly, that is,

$$P_i^t = \alpha_{P,i}^t\, P_{ov,i}, \qquad C_i^t = \alpha_{C,i}^t\, C_{ov,i}, \qquad R_{i,j}^t = \alpha_{R,ij}^t\, R_{ov,ij} \tag{8}$$

Using these corrected thermal parameters, the temperature at the next moment can be calculated by Equation (5) and then used for loss calculation as well as gradient backpropagation.
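The following PyTorch sketch shows one plausible shape for the dynamic correction network. The hidden-layer sizes match those reported later in the paper, but the bounded "near 1" output parameterization is an assumption standing in for the paper's (unspecified) restriction on the correction magnitudes.

```python
import torch
import torch.nn as nn

class DynamicCorrection(nn.Module):
    """Predicts multiplicative correction ratios for P_ov, R_ov, C_ov from
    the operating condition x_t and the previous node temperatures.
    n_inputs must equal dim(x_t) + number of temperature nodes."""

    def __init__(self, n_inputs, n_P, n_R, n_C, hidden=(32, 64)):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_inputs, hidden[0]), nn.Hardswish(),
            nn.Linear(hidden[0], hidden[1]), nn.Hardswish(),
            nn.Linear(hidden[1], n_P + n_R + n_C),
        )
        self.sizes = (n_P, n_R, n_C)

    def forward(self, x_t, theta_prev, P_ov, R_ov, C_ov, eps=0.2):
        out = self.body(torch.cat([x_t, theta_prev], dim=-1))
        # Assumed parameterization: ratios bounded in [1 - eps, 1 + eps],
        # so the correction remains a fine-tuning of the optimized values.
        alpha = 1.0 + eps * torch.tanh(out)
        a_P, a_R, a_C = torch.split(alpha, self.sizes, dim=-1)
        return a_P * P_ov, a_R * R_ov, a_C * C_ov
```

Keeping the ratios close to 1 means the physics, via the optimized parameters, still dominates, with the network acting only as a residual correction.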
To avoid parameter coupling between W_SR and W_NN and to limit the feasible parameter regions during the actual training of the proposed framework, it is better to conduct the training in two steps. First, W_SR is trained to obtain the optimized thermal parameters. This step significantly reduces the temperature estimation error and, due to the small number of learnable parameters in W_SR, is unlikely to result in overfitting. Then, W_NN is trained to represent the time-varying characteristics of the thermal parameters. At this point, with the error already reduced after the first step, the initial phase of training is less prone to challenges such as gradient explosion, severe fluctuations, or falling into poorly generalizing local minima. Loss and Backpropagation The corrected thermal losses, capacitances, and resistances are fed into the subsequent differentiable LPTN to estimate the temperature. The estimated temperature is then compared with the true temperature. Finally, the gradients are backpropagated to update W_SR and W_NN. In this work, the loss function includes not only the error between the estimated temperature ϑ_t and the true measured temperature θ_t at each time step, denoted L_Data, but also an additional term for the error between the temperature change rates dϑ/dt and dθ/dt, denoted L_ODE. This transient characteristic is primarily introduced by the thermal capacitances; therefore, adding this loss term is also beneficial for training. The weight of the two loss terms is adjusted by the coefficient β, i.e., L = L_Data + βL_ODE. Different values of β result in different learning curves and accuracies, making β a hyperparameter. One can see that temperature estimation is essentially an iterative process that requires real-time operating conditions and the temperature information of the previous time step. Therefore, the proposed framework works like a recurrent neural network (RNN). To avoid excessively long sequences that incur gradient explosion or gradient vanishing, we employ truncated backpropagation through time (TBPTT), a method commonly used to train RNN-like networks, as shown in Figure 4. Specifically, we manually truncate the temperature sequence into smaller segments and then backpropagate the errors through these segments during training. Simulation In this section, we first establish a fine-grained simulation model of the PMSM based on MotorCAD. Then, we generate simulation data under various operating conditions to validate the effectiveness of the proposed method. Finally, we investigate the performance and behavior of the framework under different settings through multiple experiments.
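To summarize the training mechanics described in the two preceding subsections, here is a compact, illustrative TBPTT loop. `model(x_t, theta)` is assumed to wrap the correction network plus one backward Euler step; the segment length, the two loss terms, and the β weighting follow the description above, but the exact loop structure is a sketch rather than the authors' code.

```python
def train_epoch(model, optimizer, x_seq, theta_true, dt, k=1024, beta=10.0):
    """One epoch of truncated backpropagation through time.

    x_seq: sequence of operating-condition tensors
    theta_true: sequence of measured node-temperature tensors
    k: TBPTT segment length (graph is cut every k steps)
    """
    theta = theta_true[0].detach()
    for start in range(0, len(x_seq) - 1, k):
        optimizer.zero_grad()
        theta = theta.detach()              # truncate the graph here
        loss = 0.0
        for t in range(start, min(start + k, len(x_seq) - 1)):
            theta_next = model(x_seq[t], theta)
            l_data = ((theta_next - theta_true[t + 1]) ** 2).mean()
            d_est = (theta_next - theta) / dt
            d_true = (theta_true[t + 1] - theta_true[t]) / dt
            l_ode = ((d_est - d_true) ** 2).mean()
            loss = loss + l_data + beta * l_ode   # L = L_Data + beta * L_ODE
            theta = theta_next
        loss.backward()
        optimizer.step()
```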
Thermal Simulation Model The motor investigated in this work is an 8-pole, 48-slot PMSM designed for automotive use. The fundamental geometric and material parameters are presented in Table 1. The motor's hairpin winding consists of 5 layers, connected in a Y configuration. To establish a corresponding simulation model in the MotorCAD software, we first need to specify the detailed actual geometric parameters in the geometry panel, including radial and axial dimensions, for example, stator inner and outer diameters, axial length, slot depth and width, number of layers of permanent magnets and the length and angle of each layer, shaft diameter, cooling duct diameter, etc. The configured radial section, axial section, and 3D view are shown in Figure 5. Then, it is necessary to set the specific connection of the winding. The software supports directly selecting hairpin windings and allows customization of the winding connections. The customized winding connections are shown in Figure 6. By setting the materials of the stator, rotor, and permanent magnets, the software provides material-related properties such as thermal conductivity, specific heat, density, etc. For the thermal simulation calculations, the cooling of this motor includes housing water jacket cooling, rotor water jacket cooling, and winding end spray, which can be seen in Figure 5. The temperature of these coolants is controllable and measurable. Finally, we can manually formulate duty cycle data for transient temperature calculation. The definitions of duty cycle mainly include torque-speed, loss-speed, and current-speed. When calculating, MotorCAD builds a thermal network based on the actual information of the motor and obtains simulation values for the thermal parameters through theoretical and empirical formulas. The fine-grained simulation LPTN includes 135 nodes and is based on the actual geometric parameters, material properties, windings, and cooling system configurations. Subsequently, a simplified thermal model is developed, which consists of 10 nodes, as shown in Figure 7. Apart from the thermal resistances between the coolant nodes, there are in total 42 thermal resistances. Similarly, the software can provide simulation values for the thermal parameters of the simplified thermal model, including torque-speed grid loss data, R_sv, and C_sv. The torque-speed grid loss data are used to obtain P_sv by bilinear interpolation (a small sketch of this interpolation follows below). Synthetic Data We randomly select from candidate operating points within the motor's maximum torque/speed curve to construct a specific set of operating conditions. Subsequently, these conditions are imported into MotorCAD, and the fine-grained thermal model is simulated to obtain temperature data as ground truth. With the simplified thermal model and the corresponding simulated thermal parameters, our proposed method is employed to enhance the temperature estimation accuracy of the nodes in the simplified thermal model, thereby validating the effectiveness of our approach. Different sets of candidate operating points are used for generating training and testing conditions to avoid overlap, as indicated by the circles in Figure 8. We finally generated 30 training conditions (20 transient + 10 steady) and 10 testing conditions (5 transient + 5 steady). Each set of conditions has a duration of 800 s and the sampling frequency is 2 Hz.
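For readers unfamiliar with the lookup step, here is an illustrative bilinear interpolation over the torque-speed loss grid. The grid layout and variable names are assumptions for the sketch, not the software's export format.

```python
import numpy as np

def loss_lookup(speed, torque, speed_grid, torque_grid, loss_grid):
    """Bilinear interpolation of the simulated loss map at one operating
    point. loss_grid[i, j] holds the loss at (speed_grid[i], torque_grid[j]);
    both grids are assumed sorted in ascending order."""
    # Find the lower-left grid cell containing the query point.
    i = np.clip(np.searchsorted(speed_grid, speed) - 1, 0, len(speed_grid) - 2)
    j = np.clip(np.searchsorted(torque_grid, torque) - 1, 0, len(torque_grid) - 2)
    # Normalized coordinates inside the cell.
    ts = (speed - speed_grid[i]) / (speed_grid[i + 1] - speed_grid[i])
    tt = (torque - torque_grid[j]) / (torque_grid[j + 1] - torque_grid[j])
    # Weighted combination of the four corners.
    return ((1 - ts) * (1 - tt) * loss_grid[i, j]
            + ts * (1 - tt) * loss_grid[i + 1, j]
            + (1 - ts) * tt * loss_grid[i, j + 1]
            + ts * tt * loss_grid[i + 1, j + 1])
```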
Validation Based on Synthetic Data As described in the previous chapter, we first optimize the simulated thermal parameters that are directly exported from the software, using all training data and gradient descent, to obtain R_ov and C_ov. The training process comprises 1400 epochs with a small learning rate of 1 × 10^-5; the error curve during training is shown in Figure 9 (the error curve for optimizing the simulated thermal parameters; this step corresponds to thermal parameter optimization in Figure 3). Then, we set up the neural network with two hidden layers of 32 and 64 neurons, respectively, and use Hardswish [28] as the activation function. The optimizer is Adam with an initial learning rate of 1 × 10^-4 and a cosine annealing decay strategy. The training comprises 1200 epochs, with a TBPTT segment length of 1024 and mean squared error (MSE) as the loss function. The error curves for the mean absolute error (MAE) and MSE of the 7 nodes are shown in Figure 10. Figure 11 shows the estimation results of the proposed method. Compared with the results calculated merely from the simulation parameters, it can be seen that the proposed method achieves excellent accuracy in regions with drastic temperature changes. To better understand the behavior of the network, further exploration of model interpretability was conducted. It is meaningful to observe the distribution of correction ratios; hence, we create a histogram of the frequency distribution of correction ratios for all thermal resistances and capacitances in the testing set, as shown in Figure 12 and Table 3. This provides insight into how the corrections are distributed across the different components and nodes of the thermal model. The Importance of Simulation Values We first investigated the necessity of P_sv, C_sv, and R_sv, that is, whether the introduction of simulation values affects the final temperature estimation accuracy. When making predictions without relying on some simulation values, the network directly predicts values instead of ratios. In this situation, at initialization, the total loss is evenly distributed among the seven nodes. For resistances, considering that most of the simulation values are small, all thermal resistances are randomly initialized with a mean of 1/e, and the network's outputs undergo exponentiation with base e to obtain the final predicted thermal resistances. For capacitances, similarly, the simulation values are in the range of hundreds to thousands, so each node's thermal capacitance is initialized to around 1200, and the outputs of the network undergo exponentiation with base 10 to obtain the final predicted thermal capacitances. Such a conversion also ensures non-negativity (see the sketch below). Furthermore, experiments are conducted with different data sizes, including all data (20 + 10), twelve transient and eight steady conditions (12 + 8), and seven transient and three steady conditions (7 + 3).
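The positivity-preserving parameterization just described might be sketched as follows; the offsets and constants here are illustrative choices consistent with the stated initializations (1/e for resistances, roughly 1200 for capacitances), not values taken from the paper.

```python
import torch

# Raw network outputs are exponentiated so that the predicted
# resistances and capacitances are always strictly positive.

raw_R = torch.randn(42) - 1.0        # raw outputs centred at -1, so that ...
R_pred = torch.exp(raw_R)            # ... exp() yields values around 1/e

raw_C = torch.full((7,), 3.08)       # log10(1200) is roughly 3.08, so ...
C_pred = 10.0 ** raw_C               # ... initial capacitances sit near 1200
```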
Loss Term L_ODE For the loss function L = L_Data + βL_ODE, the weight of the differential loss term L_ODE can be adjusted via the coefficient β. As mentioned before, the thermal network's transient characteristics are caused mainly by the thermal capacitances; intuitively, adding a transient-related loss term can benefit the training of the neural network. Therefore, we compare four sets of experiments: β = 0, β = 10, β = 100, and using only L_ODE. It is important to note that previous research corresponds to β = 0. When β = 10, the ratio between L_Data and L_ODE is approximately 10:1, and when β = 100, it is about 1:1. Without Correcting One Parameter As shown in Figure 3, considering the time-varying characteristic of the thermal parameters, there is a dynamic correction for the thermal capacitances, resistances, and losses, respectively, namely, α_P^t, α_R^t, and α_C^t. To examine the impact and necessity of the dynamic correction, the following three settings are compared: (1) without correcting capacitances, i.e., the capacitances remain unchanged rather than being dynamically corrected during training and testing; (2) without correcting resistances, i.e., the resistances remain unchanged; (3) without correcting losses, i.e., the losses remain unchanged. Bench Testing We set up an experimental test bench, as shown in Figure 13. Thermal sensors are used to measure and record temperature data at various positions inside the motor for subsequent validation. Due to practical limitations, we measured only the three parts of the windings (front end, active part, and rear end), along with the stator tooth. The internal thermocouple layout scheme is illustrated in Figure 14. There are a total of twenty-four thermocouples, with eight in each layer at the front end and rear end. For the windings in the slots, sensors are placed below the third layer, axially in the middle of the iron core. A total of five thermocouples are arranged for the stator tooth, also axially in the middle of the iron core. The acquisition frequency is 10 Hz, and the measured temperatures of the sensors under a specific operating condition are shown in Figure 15 as an example. It can be observed that the data contain substantial noise. The motor's placement, cooling conditions, and variations between winding layers all cause considerable fluctuations in temperature at the same end of the windings. We take the average temperature of the sensors in each part of the winding as the ground truth, corresponding to Wdg_F, Wdg_A, and Wdg_R.
To obtain more reasonable simulated thermal parameters, this section begins by adjusting the relevant settings of the simulation model, mainly those related to cooling, so that the simulation model's efficiency map is close to that of the measured motor. At this point, the losses interpolated from the simulation can be considered good initial values for the subsequent training. To reduce the computational cost and further alleviate the impact of noise in the high-frequency data, we downsample the measured data from 10 Hz to 2 Hz. The experimental data were divided into training and testing sets. The training set includes data from nine operating conditions: six steady conditions (continuous performance testing) and three transient conditions (peak performance testing). The testing set includes data from two operating conditions: one steady and one transient. The total training dataset consists of 46,000 records, similar in size to the synthetic data. However, it is important to note that the measured data cover fewer types of operating points and contain ubiquitous noise, making them more challenging than the simulation data. First, to reduce the error of the simulated thermal parameters directly exported by the simulation software, they are optimized using the measured temperature data to obtain optimized values that better align with the measurements. Given the presence of noise in the data, a smaller learning rate and fewer training epochs are used; in this experiment, a learning rate of 5 × 10^-4 and 150 epochs with SGD. Then, due to the limited amount of data, especially the limited variety of operating points, the number of neurons in the second hidden layer of the network is reduced to 32. Considering that the network outputs temperatures for seven nodes while label data are provided for only three nodes, this essentially constitutes an under-constrained optimization problem. The previous section showed that the dynamic correction merely fine-tunes the optimized thermal parameters; therefore, we artificially restrict the magnitude ranges of α_P^t, α_R^t, and α_C^t. This also highlights one of the advantages of incorporating simulation parameters. Additionally, it is important to note that we do not use any normalization layers commonly employed in deep learning, such as LayerNorm [29] or BatchNorm [30], because after normalization the original physical meaning of an input cannot be preserved. For instance, different physical quantities like speed and torque may be mapped to the same value after normalization, losing the comparability between two operating points. This contradicts the underlying physical mechanisms, brings severe fluctuations, and degrades the final accuracy, despite speeding up training in the early and middle stages in our experiments. We use Adam for 1000 epochs with an initial learning rate of 1 × 10^-3; the TBPTT segment length remains 1024. Since almost all outliers were removed during the data processing stage, we use the MSE loss, which is helpful for reducing the maximum error. The average error during training and the final accuracy are shown in Figure 16 and Table 5. The temperature estimation results for the test conditions are shown in Figure 17.
Method Comparison We compared two common models for time-series regression with a comparable number of learnable parameters, namely, long short-term memory (LSTM) and the temporal convolutional network (TCN). Additional steps such as data standardization and feature engineering are performed for these two. Following reference [31], the exponentially weighted moving average (EWMA) and exponentially weighted moving standard deviation (EWMS) are calculated for speed, torque, current, voltage, power, and coolant temperature with window sizes of 200 and 400, resulting in a total of 22 features (a sketch of this feature construction appears at the end of this section). It is noteworthy that the proposed physics-based temperature estimation framework requires only speed, torque, and coolant temperature, without any feature engineering. For LSTM and TCN, however, we found that comparable prediction results could not be achieved on such a small dataset without feature engineering. We manually chose hyperparameters to achieve better accuracy for both algorithms, as shown in Table 6. The final prediction accuracy is compared with the results of the proposed method in Table 7. It can be observed from Figure 18 that both models have sufficient fitting capability, achieving very low errors on the training set. However, they show significant overfitting, as evidenced by the noticeable gap in accuracy on the testing set and a relatively large maximum error. Notably, LSTM performs worse than TCN, possibly due to TCN's ability to better capture both local and global patterns in the data. Next, we investigate the impact of data size on the accuracy of the different methods, as shown in Figure 19 and Table 8. The variation in accuracy under different data sizes effectively probes the robustness and stability of the different methods. It can be observed that the proposed method consistently achieves better results, regardless of whether it is based on simulation values or not, and its accuracy remains relatively stable as the data size varies. In contrast, the accuracy of the data-driven algorithms declines significantly, although they still perform well on the training set. Considering both the mean squared error and the maximum error, the proposed method obtains the best results with minimal sensitivity to data size. Due to the incorporation of physical priors and physical constraints, the proposed method is less dependent on data and more effective at extracting the information contained within the data. The Temperature Estimation of the Stator Tooth This section explores the estimation results for the stator tooth under different settings. Since the proposed method can simultaneously output temperatures for all nodes, this analysis serves both as an extension to validate the framework and as an example of a potential application. The stator tooth's temperature data are not involved in the training, so the tooth's temperature in the training set is also suitable for evaluating the final performance. Figure 20 illustrates the estimated tooth temperature obtained by the proposed method with simulated thermal parameters for four different operating conditions. It is noteworthy that even without any measured data for the tooth, the framework, guided by the thermal network structure and physical priors, achieves considerable accuracy in estimating the tooth temperature.
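Returning to the feature engineering used for the baseline models earlier in this section: a minimal pandas sketch of the EWMA/EWMS construction is given below. Column names are illustrative, and since the paper reports 22 features, its exact set evidently differs slightly from the 24 produced by this naive sweep.

```python
import pandas as pd

signals = ["speed", "torque", "current", "voltage", "power", "coolant_temp"]

def add_ewm_features(df, spans=(200, 400)):
    """Append exponentially weighted moving averages and standard
    deviations of each raw signal at two window sizes, as inputs
    for the LSTM/TCN baselines."""
    out = df.copy()
    for span in spans:
        for col in signals:
            out[f"{col}_ewma_{span}"] = df[col].ewm(span=span).mean()
            out[f"{col}_ewms_{span}"] = df[col].ewm(span=span).std()
    return out
```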
Table 9 further illustrates the estimation results when some SVs are not used. It is worth noting that all of these models exhibit relatively small errors on the three winding nodes. However, when R_sv is discarded, or when no simulation values are used at all, the estimation errors become significantly large. It should be pointed out that, relying solely on the simulated thermal parameters without dynamic correction, the corresponding tooth temperature estimation errors MAE, MSE, and MAX are 9.34 °C, 119.83 °C², and 29.22 °C, respectively. The neural network trained with all three SVs achieves the highest accuracy, followed by the variant without C_sv, as the number of thermal capacitances is small and the given initial values are relatively reasonable. However, when no SVs are provided at all, the estimated tooth temperature is essentially meaningless. Discussion Previous studies have largely focused either on purely data-driven methods or on models heavily reliant on physical principles without integrating the advantages of machine learning techniques. This paper proposes a temperature estimation framework that integrates physical information with data-driven methods. The proposed framework effectively combines neural networks, differentiable physical models, and simulation results, and it addresses the limitations of purely data-driven methods (lack of physical interpretability and potential divergence from physical principles) and of purely physical models (rigidity and potential inaccuracies in modeling complex real-world phenomena). The effectiveness of the method is validated using both synthetic and measured data, including a thorough ablation study of various settings, diverse comparisons with common data-driven methods, and an exploration of temperature estimation for a node without any associated labels. Due to the incorporation of physical principles, the output temperatures are more reasonable and robust, and the overall results exhibit better physical consistency. The method holds significant practical value for optimizing motor performance, extending lifespan, and ensuring safety in applications where thermal management is critical. While the current findings are promising, several future research directions could further enhance the framework's applicability: • Validating the proposed method's effectiveness and generalization ability on a more extensive and diverse set of real-world data; • Investigating other neural network architectures, such as graph neural networks (GNNs) or convolutional neural networks (CNNs), which could provide insight into their efficacy in capturing the temporal dynamics and spatial relationships within motor systems; • Implementing the framework in real-time control systems and validating its performance in operational environments, a crucial step toward industrial application. Funding: This research received no external funding. Figure 1. Different ways of combining physical models with neural networks. (a) Neural networks first; (b) physical models first; (c) parallel. For (a), the output of the neural network is fed into the following physical model. For (b) and (c), the gradients generally do not flow directly through the physical model, and the neural network primarily serves to learn the residual error. Figure 2. The i-th node in a typical LPTN. Figure 3.
Figure 3. Differentiable physics temperature estimation framework. (1) Thermal parameter optimization: simulation thermal parameters are optimized to obtain better initial values. (2) Dynamic correction: the neural network predicts several correction ratios at each time step. These two outputs are then fed into the downstream differentiable thermal model to obtain the estimated temperature.
Figure 4. Truncated backpropagation through time (TBPTT) for training the proposed framework.
Figure 5. Simulation model in MotorCAD. (a) Radial section; (b) axial section; (c) 3D view. Different colors represent different components of the motor, while the arrows in (b) and (c) denote the cooling paths in the simulation model.
Figure 8. Candidate points (represented by circles) for constituting operating conditions. (a) Training; (b) testing. For both training and testing conditions, non-overlapping operating points are chosen.
Figure 11. The performance of the proposed method on synthetic data. (a) Stator yoke temperature estimation result; (b) stator yoke temperature estimation error; (c) Wdg_R temperature estimation result; (d) Wdg_R temperature estimation error.
Figure 14. Arrangement of thermal sensors at the winding. (a) U-shaped winding; (b) axial middle in slots; (c) welded winding. 8 sensors for each layer at the U-shaped end and welding end; 5 sensors for the third layer at the axial middle in slots.
6.2. Validation Based on Measured Data
6.2.1. The Performance of the Proposed Method
Figure 19. The mean square error and max error on the testing set when providing different numbers of training data. The testing set remains unchanged. (a) MSE; (b) MAX.
Figure 20. Estimation results for stator tooth temperature. (a-d) are four different operating conditions.
Table 1. Parameters of the permanent magnet synchronous motor in this work.
Table 2. Simplified thermal model (10 nodes) and the meaning of each node in the simplified thermal model. For better visualization, we only display the distribution of several thermal resistances. * The temperatures of these nodes are known at each time.
Table 5. Ablation study based on measured data.
Table 7. The error of different methods.
Table 8. The errors on the testing set when different amounts of training data are provided. "w/o values" and "w values" mean the proposed method without and with simulation values.
Table 9. The estimation errors for stator tooth temperature.
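As a generic illustration of the differentiable thermal model referred to in Figure 3, the sketch below writes one explicit-Euler step of a lumped-parameter thermal network (LPTN) in PyTorch so that gradients can flow through the physical model. This is a minimal sketch under assumed interfaces, not the paper's implementation; the tensor layout and the handling of nodes with known temperatures are assumptions.

```python
import torch

def lptn_step(T, R_inv, C, P, T_known, dt):
    """One explicit-Euler step of C_i dT_i/dt = sum_j (T_j - T_i)/R_ij + P_i.
    T: (n,) nodal temperatures; R_inv[i, j] = 1/R_ij (0 if unconnected);
    C: (n,) heat capacities; P: (n,) losses; T_known: (n,) bool mask of
    nodes whose temperatures are measured inputs rather than states."""
    # Pairwise conduction heat flow into each node, plus internal losses.
    heat_in = (R_inv * (T.unsqueeze(0) - T.unsqueeze(1))).sum(dim=1) + P
    T_next = T + dt * heat_in / C
    # Keep measured nodes at their (externally supplied) values.
    return torch.where(T_known, T, T_next)
```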
8,202.6
2024-04-21T00:00:00.000
[ "Engineering", "Physics", "Computer Science" ]
The Impact of Macroeconomic Factors on Credit Risk in Conventional Banks and Islamic Banks: Evidence from Indonesia
The banking sector is viewed as an important player in economic development. However, this sector is vulnerable to macroeconomic instability, which leads to higher credit risk for banks. Unlike for conventional banking, there is limited research on the impact of macroeconomic factors on credit risk in Islamic banking, which motivates further study of the question. The aim of this research is to evaluate the association between macroeconomic factors and credit risk through a comparative analysis of these two banking systems. Empirical results suggest that Islamic banks are more resistant during crises, and that only two variables (Exchange Rate and MS) are significant for credit risk in Islamic banks, whereas in conventional banks almost all variables are significant except the Industrial Production Index. This research is expected to contribute to the literature and to provide stakeholders with comparable information on credit risk profiles across the banks.

Introduction
Indonesia has recently become much more vulnerable to economic shocks affecting trade, banking, and investment. According to Tambunan (2010), Indonesia experienced two big economic crises and one small crisis (Presidential secretariat office, BPS and BI, 2015). The first, in 1997/1998, was triggered by sudden capital flight from the country, which caused Indonesia's local currency, the rupiah, to depreciate significantly against the US dollar. The second was the financial crisis that started in 2007 with the subprime mortgage collapse in the United States; the crisis forced around 123 banks in the US to fail, including Lehman Brothers (Gup, 2010). Beginning in September 2008, it spread and affected many countries (Marshall, 2009). The latest, in September 2015, was the devaluation of China's yuan, which caused the rupiah to depreciate to IDR 14,054 per US dollar (Bloomberg Dollar Index). Macroeconomic instability during crises has the greatest impact on banking and can lead to banking crises. The worst impact of a crisis on the banking sector occurred in 1998: most banks that were considered sound at the time went bankrupt due to adverse macroeconomic conditions (Nursechafia, 2014). It is therefore a priority for stakeholders to pay attention to maintaining the stability of banks. To keep a bank stable, it is important to manage risk. According to Carey (2001), risk management is the foundation of the banking business because it allows banks to offer services sustainably. One performance indicator used to measure stability is the Non-Performing Loan ratio (for conventional banks) and the Non-Performing Financing ratio (for Islamic banks), since Indonesia operates a dual banking system of conventional and Islamic banks. From a risk management point of view, it is important for stakeholders (including investors and depositors) to know whether these two banking systems exhibit different levels of credit risk, given that the conventional banking system is based on interest, while Islamic banks utilize funds on a profit and loss sharing basis. However, despite the substantial literature on the impact of macroeconomic variables on conventional banks' credit risk,
there is relatively limited empirical research that has tested this impact in Islamic banking (e.g., Adebola (2011); Cihak & Hesse (2008); Havidz & Setiawan (2015); Al Waesabi (2013); Nursechafia (2014); Rahman & Shahimi (2010)). A number of papers discuss risks in Islamic banking, but in theoretical terms rather than through analysis of data. Hence, this study is expected to fill this gap in the literature. The purpose of this study is to find out whether Islamic banks and conventional banks have different levels of credit risk by analyzing the association of macroeconomic indicators with credit risk, measured by NPL and NPF, using 91 conventional banks and 10 Islamic banks over the study period from 2008 to 2015. The structure of this paper is as follows. Section 2 presents a short overview of Islamic banking and the hypotheses used in this study. Section 3 presents the research methodology and introduces the variables used in the paper. Section 4 compares the results for conventional and Islamic banks. Finally, Section 5 summarizes the conclusions.

Overview of Islamic Banking
Islamic banking refers to a banking system that is based on Islamic Shariah (law) and prohibits the payment and collection of riba (interest). According to Schacht (1964), cited in Lewis (2001), riba is simply a special case of unjustified enrichment or, in the terms of the Holy Qur'an, consuming (that is, appropriating for one's own use) the property of others for no good reason, which is prohibited. The main argument against interest, according to the Institute of Islamic Banking & Insurance, is that money in Islam is not regarded as an asset from which it is ethically permissible to earn a direct return; money tends to be viewed purely as a medium of exchange, and interest can lead to injustice and oppression in society. Therefore, this concept encourages investing money in concrete projects with profit sharing instead of earning interest. The comparison between the two concepts is presented in Table 1. In brief: the risk of loss is borne by two parties (risk sharing), between investor and entrepreneur or between lender and borrower (according to the contract); the percentage of returns is determined based on the loan amount (conventional) versus based on the activities undertaken to achieve a profit (Islamic); and payment is made as promised, without consideration of profit or loss (conventional), versus based on the increase of total revenue (Islamic). Source: Antonio (2010).

Credit risk is associated with each financial product provided by the banks. As a result, the nature of credit risk in Islamic banks differs markedly from that in conventional banks. The main modes of financing in Islamic banks, according to Waseem (2014), are: 1. Mudaraba: a profit-sharing partnership between the "rabb al maal" (capital provider) and the "mudarib" (manager). The profit is shared between the two in a pre-agreed ratio, while losses are borne exclusively by the capital provider. The entrepreneur is covered by limited liability provisions. 2. Murabaha: a contract for the sale of goods in which the payment includes a profit margin agreed upon by the two parties. This product is predominantly offered by Islamic banks in asset financing, property, and commodity export and import. 3. Musharaka: a profit-sharing partnership under an agreement to share profits in a pre-agreed ratio and to share losses in proportion to contributions.

Comparison between Conventional and Islamic Banks
Indonesia is one of the countries implementing a dual banking system, in compliance with the Indonesian banking architecture.
The conventional and Islamic banking systems jointly and synergistically support wider public fund mobilization within the framework of financing the national economic sectors. Conventional banking is based on the principle of making a profit from the spread between borrowing and lending rates of interest. The interest charged on a loan can be a multiple of the principal, depending on the length of the loan period. In other words, this principle contradicts that of Islamic banking. Table 2 presents a comparison between the two banking systems.

The Impact of Macroeconomic Factors on Credit Risk
The relationship between macroeconomic indicators and credit risk has been widely discussed in connection with the financial vulnerability and stability of banking sectors. This study attempts to highlight the impact of macroeconomic factors on credit risk (NPL and NPF). Some researchers believe that macroeconomic conditions are the main source of systematic risk, reflected in the growth or decline of loan defaults (Touny & Shehab, 2015). In addition, Ahmad & Bashir (2013) explain that during a depression, the price of assets kept as collateral declines, resulting in growth of NPL. To prevent this kind of situation, it is important for government policy makers to identify the macroeconomic factors affecting bank stability through the NPL and NPF ratios. Empirical studies have confirmed the linkage between macroeconomic conditions and the stability of the two banking systems. For instance, unlike the conventional banking system, Islamic banking, which uses profit and loss sharing instead of an interest rate, might exhibit a different level of credit risk. A number of empirical studies have undertaken comparative analyses of the credit risk of conventional and Islamic banking. Beck, Demirgüç-Kunt, & Merrouche (2012), Rahim & Zakaria (2013), Ali (2013), and Ferhi (2015) conclude that Islamic banks are more stable than conventional banks. On the other hand, Elsiefy (2012) concludes that the Islamic banking model implies a higher level of risk.

Crisis
A number of empirical studies have undertaken comparative analyses of the credit risk of conventional and Islamic banking. Beck et al. (2012) investigate the business orientation, efficiency, asset quality, and stability of banks in 22 countries over 1995-2009. On asset quality, the study finds that Islamic banks have significantly lower non-performing loans and loan loss provisions during crises. This confirms previous studies by Rahim & Zakaria (2013), Ali (2013), and Ferhi (2015). On the other hand, Elsiefy (2012), who investigated the banking sector in Qatar using a sensitivity scenario test, finds that Islamic banks appear to be more exposed to credit risk than conventional banks, as the impact of deteriorating credit quality would be more severe for Islamic banks. In addition, Islamic banks are assumed to carry higher credit risk after the global crisis of 2008 than before it. Based on this explanation, the following hypotheses are proposed:

Industrial Production Index
Several studies in conventional banking have found the Industrial Production Index to be a significant variable explaining credit risk. Ahmad & Bashir (2013) find that the industrial production index has a negative and significant effect on NPL. This confirms previous studies by Kalirai (2002), Vatansever (2013), and Festiae, Mejra, & Kaykler (2011). In the Islamic banking research area, the results differ among researchers.
Adebola (2011) revealed that the Industrial Production Index has a positive but insignificant relationship with credit risk. However, Nursechafia (2014), who examined the same country as the current study, Indonesia, over the period 2005 to 2012, argues that the Industrial Production Index has a negative and significant relationship with credit risk. Nursechafia (2014) proposes that the reason for the negative relationship is that better economic growth, reflected in an increase of the Industrial Production Index, leads to increased loan repayment capacity; as a result, credit risk in Islamic banks decreases. Thus, the above arguments lead to the following hypotheses:

Bank Indonesia Certificate Rate / Bank Indonesia Certificate Sharia Rate (SBIR/SBISR)
SBIR/SBISR is the proxy for the interest rate. Singjergji (2013), investigating the main macroeconomic variables in the Albanian banking sector, finds a positive and significant relationship between the interest rate and credit risk. This confirms previous studies by Farhan (2012), Ahmad & Bashir (2013), and Bofondi (2011), which indicate that the interest rate affects the amount of bad debt in the case of floating interest rates: the increase in debt caused by higher interest payments results in a rise in non-performing loans. The above studies were undertaken in conventional banking. In Islamic banking research, by contrast, the results differ. For instance, Al-Waesabi (2013) notes that the interest rate is not statistically significant, but Adebola (2011), who investigated the Islamic banking sector in Malaysia, indicates that the interest rate has a significant and positive impact on credit risk. Therefore, this argument suggests the following hypotheses:

Inflation
There are two kinds of empirical evidence on the relationship between inflation and credit risk in conventional banks: it can be negative or positive. Khemraj (2009) finds that inflation is positive and significant in affecting non-performing loans. Similar results are reported by Farhan (2012). In contrast, Havidz & Setiawan (2015) and Al-Waesabi (2013) note that inflation does not appear to be relevant to credit risk. In other words, conventional and Islamic banks show different impacts of inflation on credit risk. Thus, the above arguments lead to the following hypotheses:

Exchange Rate
The impact of the exchange rate on credit risk is closely related to export and import activity, and the literature on the two banking systems reports different directions of impact. Most studies on conventional banks find that the exchange rate has a significant and positive impact on credit risk (see, for example, Fofack (2005); Akinlo (2014)). Fofack (2005) attributes the positive impact to the large concentration of loans in the export-oriented agriculture sector: exchange rate appreciation may limit growth prospects by squeezing profit margins, leading to higher NPL. Meanwhile, in Islamic banking, Čihák & Hesse (2008) and Nursechafia (2014) find a negative relationship between the exchange rate and credit risk. Nursechafia (2014) suggests that the majority of export activity in Indonesia still depends on imported tools and machinery; thus, currency depreciation raises import prices, which weakens the ability of companies to repay their loans. The above arguments support the view that a strong domestic currency hampers exports and makes imports cheaper.
Conversely, a weaker domestic currency stimulates exports and makes imports more expensive (Investopedia, 2016). Based on the above discussion, the exchange rate is related to the credit risk of Islamic banks, and the following hypotheses are proposed: H5-1: Exchange rate has a positive impact on the credit risk of conventional banks.

Money Supply (MS)
In the conventional banking research area, most studies find a positive impact of the money supply on credit risk. Akinlo (2014) revealed that an increase in the aggregate stock of money may contribute to a deterioration of banks' portfolios in a country and lead to an increase in non-performing loans. This confirms previous studies by Badar (2013) and Fofack (2005). However, Nikolaidou (2011) and Touny & Shehab (2015) find a negative relationship, because an increase in the money supply stimulates investment and consumption, and consequently increases income and therefore the ability of debtors to meet their loan obligations. For Islamic banking, Rahman & Shahimi (2010) and Nursechafia (2014) find that the money supply is positively related to credit risk. Hence, the following hypotheses are proposed: H6-2: MS has an impact on the credit risk of Islamic banks.

Based on the literature review discussed above, there are relatively few academic papers that have tested the impact of macroeconomic variables on credit risk in Islamic banking. Therefore, this study is expected to contribute to the existing literature on this topic. Unlike previous research, this study is conducted in Indonesia, which implements a dual banking system in which conventional and Islamic banks operate alongside each other in the industry, enabling an explicit comparison of the two banking systems through analysis of this impact.

Data Sample of the Study
Panel data are used to conduct the empirical analysis of the determinants of credit risk in the two banking systems. The research sample comprises quarterly data from 91 conventional banks and 11 Islamic banks, collected from the Indonesian Banking Statistics (SPI, Bank Indonesia) and the Central Bureau of Statistics from 2008 to 2015.

Model and Operational Definition of Variables
This study uses an OLS regression model to test the variables affecting credit risk (see the sketch below). The variables of the study are presented and explained in Table 3. Since it addresses two different cases, the study is separated into two main models: the conventional banks model, which uses NPL as the dependent variable, and the Islamic banks model, which uses NPF as the dependent variable. The Industrial Production Index reflects changes in overall economic activity similar to GDP; in other words, it represents a country's economic growth by measuring real production output. The Industrial Production Index is more exhaustive than GDP in explaining economic growth on a monthly basis, covering broad sectors, namely mining, manufacturing, and electricity (Investopedia (2016); Linda (2007)). NPL/NPF is the ratio of loans whose collectibility is classified as substandard, doubtful, or bad debt to total loans; NPF is the term used for Islamic banks (Bank Indonesia Regulation).

Tables 4 and 5 contain the descriptive statistics of the macroeconomic indicators and credit risk over the study period 2008-2015. The study sample shows that conventional and Islamic banks experienced maximum NPL and NPF of 51% and 18.07%, respectively. These results indicate that both banking systems exceeded the maximum limit for credit risk.
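To make the regression setup concrete, the following is a minimal sketch of a pooled OLS fit of the kind described above, using statsmodels. The file name and column names are assumptions for illustration; the paper's exact variable coding and panel treatment are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one row per bank-quarter with the credit-risk ratio
# (NPL for conventional banks, NPF for Islamic banks) and macro indicators.
df = pd.read_csv("banks_quarterly.csv")  # assumed file layout

X = df[["crisis", "ipx", "sbir", "inflation", "exchange_rate", "money_supply"]]
X = sm.add_constant(X)          # intercept term
y = df["npl"]                   # use the NPF column for the Islamic-bank model

ols = sm.OLS(y, X, missing="drop").fit()
print(ols.summary())            # coefficients, t-stats, significance levels
```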
Based on Bank Indonesia regulation, banks should maintain NPL and NPF at no more than 5%. However, the means of NPL and NPF are currently safe, at 2.67% and 3.24%, respectively. Furthermore, Tables 6 and 7 show the correlation matrices of the explanatory variables for conventional and Islamic banks. Some variables show significant correlations. Spearman (Pearson) correlation coefficients are above (below) the diagonal of each table.

Regression Analysis
The results of the Ordinary Least Squares estimation (Gujarati, 2004) are presented below. According to Tables 8 and 9, Crisis shows a different impact on the credit risk of the two banking systems. In conventional banks, the relationship is positive and significant, with a coefficient of 0.591. In Islamic banks, the coefficient is 0.70 but has no significant impact on credit risk. The results confirm hypothesis H1-1, in line with similar studies by Beck et al. (2012), Rahim & Zakaria (2013), Ali (2013), and Ferhi (2015), who found that Islamic banks are more resistant during crises than conventional banks, but reject hypothesis H1-2. This may be attributed to the principles and characteristics of financing in Islamic banks. The principle of PLS (profit and loss sharing) can help Islamic banks improve collection efficiency, because the allocation of funds depends on a project's productivity rather than on repaying the loan amount together with interest at a predetermined rate. As a result, customers of Islamic banks need not worry about changes in the interest rate. Moreover, financing in Islamic banks is characterized by risk sharing: the risk of loss is borne not by one party alone, but by two parties (investor and entrepreneur), according to the contract. This can therefore be considered a main contribution to Islamic banks being more resistant to crises than interest-based banks.

As expected, the Industrial Production Index shows a negative but insignificant impact on credit risk in both banking systems. These results reject hypotheses H2-1 and H2-2. The Industrial Production Index is not sufficient to explain the impact, because the index reflects only the output of the manufacturing sector, and the portion of credit extended to the manufacturing sector is relatively small. Therefore, the Industrial Production Index shows an insignificant impact on credit risk.

SBIR is the proxy for the interest rate. Based on the results, SBIR has a positive and significant impact on credit risk for conventional banks. These findings support hypothesis H3-1 and are equivalent to the findings of Singjergji (2013), Farhan (2012), Adebola (2011), and Ahmad & Bashir (2013). The theoretical justification for the positive association is that an increase in SBIR is followed by an increase in bank lending rates (Bank Indonesia, 2013). This weakens borrowers' capability to repay their loans and results in higher credit risk, because in a conventional bank the lender is guaranteed a predetermined rate of interest or return, which can lead to little attention being paid to the appraisal and evaluation of projects. In Islamic banks, on the other hand, SBISR shows an insignificant impact on credit risk, owing to the Islamic banks' use of the PLS system instead of interest; however, this finding does not support hypothesis H3-2.
The theoretical justification for the insignificance is that the financing contract scheme differs from that of conventional banking: the risk is borne by two parties, the provider of capital (investor) and the user of funds (entrepreneur). Because of this risk sharing, the two parties pay greater attention to developing project appraisals and evaluations, rather than relying on the interest rate. This result is equivalent to the finding of Al-Waesabi (2013), who reveals that the interest rate is insignificant for credit risk in Islamic banks.

Inflation is negative and significant at the 1% level for credit risk in conventional banks, which supports hypothesis H4-1. This may indicate that inflation does not always harm the stability of banks. Inflation over a short period does not reduce people's desire to meet their needs, meaning that economic activity increases and businesses can make profits, thereby increasing borrowers' capability to repay their loans (Touny & Shehab, 2015). Similar results are reported by Ahmad & Bashir (2013), Shingjergji (2013), Babihuga (2007), and Dash and Kabra (2010). For Islamic banks, inflation is insignificant, which rejects hypothesis H4-2. This may be attributed to the nature of financing in Islamic banking: for example, in murabaha (trade) financing, the customer pays the original price plus a profit margin agreed upon by the two parties, and the payment is fixed from beginning to end. Moreover, in partnership financing (mudharaba, musharaka), the profit is shared according to a pre-agreed ratio. Thus, inflation does not change the value of the payments that customers have to make to the banks; an increase in inflation therefore has no effect on the amount of the payment.

For the Exchange Rate, conventional banks and Islamic banks both show a positive and significant relationship with credit risk, with a greater impact on Islamic banks (coefficient 14.851). These results can be attributed to exchange rate appreciation weakening the ability to repay loans, especially for entrepreneurs who depend on export activity. The result supports hypothesis H5-1 for conventional banks and is consistent with the findings of Fofack (2005), Shingjergji (2013), Khemraj and Pasha (2009), and Akinlo (2014). For Islamic banks, the finding also supports hypothesis H5-2. It is interesting that although Nursechafia (2014) examined Islamic banks in the same country as this study, the sign of the impact differs. This may be attributed to several reasons. First, the sample period of that study does not include the years 2013-2015; in 2015 the value of exports was comparatively higher than that of imports (Central Bureau of Statistics, 2016), so the increasing value of exports together with exchange rate appreciation may lead to higher credit risk. Furthermore, time series in the form of monthly data may show different results from this study, which uses panel data.

The Money Supply has a negative and significant impact on credit risk in conventional banks at the 1% level, which is against hypothesis H6-1. This means that a strong expansion of the money supply has a favorable effect on credit risk. This finding is in line with similar studies by Nikolaidou (2011) and Touny & Shehab (2015), who suggest that the money supply increases productivity and stimulates investment and consumption activity.
Consequently, income increases and the ability to repay loans improves. Furthermore, Islamic banks also show a negative and significant impact at the 10% level, and the result supports hypothesis H6-2.

Conclusion
This study investigated the impact of macroeconomic factors on the credit risk of conventional banks and Islamic banks over 2008-2015. The study finds that Islamic banks are more resistant during crises, and that only two variables (Exchange Rate and MS) are significant for credit risk in Islamic banks. The Exchange Rate has a greater impact on Islamic banks than on conventional banks. In addition, the Money Supply has a negative and significant impact on credit risk at the 10% level. In conventional banks, almost all variables are significant except the Industrial Production Index (IPX). The findings of this study may serve as a reference for government policy makers to pay attention to bank-specific variables in Islamic banks in order to keep NPF stable, since most macroeconomic variables do not have a significant impact. In addition, from a risk management point of view, it is important for stakeholders (including investors and depositors) to have knowledge of the different levels of credit risk in the two banking systems.
5,535.6
2016-06-26T00:00:00.000
[ "Economics", "Business" ]
Missing covariates in competing risks analysis Studies often follow individuals until they fail from one of a number of competing failure types. One approach to analyzing such competing risks data involves modeling the cause-specific hazards as functions of baseline covariates. A common issue that arises in this context is missing values in covariates. In this setting, we first establish conditions under which complete case analysis (CCA) is valid. We then consider application of multiple imputation to handle missing covariate values, and extend the recently proposed substantive model compatible version of fully conditional specification (SMC-FCS) imputation to the competing risks setting. Through simulations and an illustrative data analysis, we compare CCA, SMC-FCS, and a recent proposal for imputing missing covariates in the competing risks setting. INTRODUCTION In competing risks analysis, individuals are followed up until they "fail" from one of a set of possible causes of failure, e.g. cause-specific death. In such situations, it is often of interest to model how the hazard of failure from the different causes depends on a set of covariates recorded at cohort entry. Arguably, the most direct approach to analyzing competing risks data is to specify models for the cause-specific hazard functions (Andersen and others, 2002). A problem that arises in practice is that one or more covariates contain missing values. While extensive research has been conducted into missing covariates in the context of generalized linear models (Ibrahim and others, 2005) and the Cox model for single failure type data (Herring and Ibrahim, 2001;White and Royston, 2009), little has been done on competing risks. Recently, Escarela and others (2016) proposed a likelihood-based approach for handling incomplete covariates in competing risks analysis, based on models for the conditional survival distributions. They focused on the case of two partially observed discrete covariates, and developed a copula-based approach to model specification, under both missing at random (MAR) and missing not at random (MNAR) mechanisms (Rubin, 1976). The simplest and most commonly used approach to handling missing covariates is to fit models of interest excluding those with missing covariate values, in a so-called complete case analysis (CCA). In Section 3, we establish a condition under which CCA is valid, and discuss how the observed data can be used to assess compatibility with this condition. An increasingly popular approach for handling missing data is to use multiple imputation (MI), usually under the MAR assumption (Carpenter and Kenward, 2013). In Section 4, we describe recent proposals for imputing covariates in the competing risks setting using standard software. We then propose an approach that ensures covariates are imputed using models that are compatible with the analyst's specified cause-specific hazard models. We compare CCA with the MI approaches in simulations in Section 5. In Section 6, we apply CCA and MI to handle missing covariates in an analysis of data from the NHANES III study. We conclude with a discussion in Section 7. SETUP AND FULL DATA ANALYSIS We assume a sample of n independent individuals. For each, we observe vectors of time-independent baseline covariates X and Z . For the moment, we assume both are fully observed. For each individual, we assume the existence of a time to failure T and failure indicator D * ∈ {1, . . . , K }, where D * indicates the type of failure. 
As described by Prentice and others (1978), the basic estimable quantities in the competing risks setting are the cause-specific hazard functions. For cause k, the cause-specific hazard function is defined as

$$h_k(t \mid X, Z) = \lim_{\Delta t \to 0} \frac{\Pr(t \le T < t + \Delta t, D^* = k \mid T \ge t, X, Z)}{\Delta t}.$$

Often the time to failure is censored, and so we further assume the existence of a time to censoring C for each individual. We observe Y = min(T, C) and D = 1(T < C)D*, which indicates either the observed cause of failure or that the individual is censored (D = 0). We assume that censoring is independent, in the sense that (T, D*) ⊥⊥ C | (X, Z). An individual's contribution to the likelihood function, conditional on X and Z, is then equal to

$$\left\{ \prod_{k=1}^{K} h_k(Y \mid X, Z)^{1(D=k)} \right\} h_0(Y \mid X, Z)^{1(D=0)} \exp\left\{ -\int_0^{Y} \left( h_0(u \mid X, Z) + \sum_{k=1}^{K} h_k(u \mid X, Z) \right) du \right\}, \qquad (2.1)$$

where h_0(t | X, Z) denotes the hazard for the censoring process, given X and Z. When covariates are fully observed, as described by Prentice and others (1978), inference for a particular (say kth) cause-specific hazard function can proceed by using standard survival analysis procedures, treating both censoring events and failures from causes other than k as censored at their time of failure. A popular approach is to assume a Cox proportional hazards model

$$h_k(t \mid X, Z) = h_{0k}(t) \exp\{ g_k(X, Z; \beta_k) \}, \qquad (2.2)$$

where h_k(t | X, Z) denotes the cause-specific hazard function for cause k, h_{0k}(t) denotes the baseline hazard function for cause k, β_k denotes a vector of cause-specific regression coefficients, and g_k(·) denotes a known function, indexed by β_k. The baseline hazard functions h_{0k}(t) can either be assumed to follow a parametric form or, as is more commonly done in the absence of missing covariates, left arbitrary. In this case, as in Cox's proportional hazards model, the cumulative baseline hazard H_{0k}(t) = ∫_0^t h_{0k}(u) du can be viewed as an infinite dimensional parameter. An alternative formulation of the competing risks problem involves postulating the existence of latent failure times for each cause of failure. This formulation, and analyses based on it, rely on strong untestable assumptions surrounding independence of competing risks (Prentice and others, 1978; Andersen and others, 2002), and so we do not pursue it further here.

COMPLETE CASE ANALYSIS
We now consider inference when X is partially observed (Z remains fully observed). We let R denote whether all components of X are observed (R = 1) or some are missing (R = 0). Without loss of generality, we assume interest lies in fitting a model for the first cause-specific hazard function. In CCA, we fit a model for this using only those individuals with X completely observed and who therefore have R = 1. In Appendix A of the Supplementary Materials (available at Biostatistics online), we show that this will be valid if R ⊥⊥ (T, D*) | (C, X, Z). This assumption encompasses both MAR mechanisms (e.g. missingness dependent only on Z) and MNAR mechanisms (e.g. missingness dependent on X, or missingness dependent on C). In the special case of single failure type data (i.e. K = 1), Rathouz (2007) established sufficient conditions under which CCA gives valid inferences. Specifically, he showed that valid inferences are obtained if R ⊥⊥ (T, X) | (C, Z). We note that since single failure time data are a special case of competing risks with K = 1, our result extends that of Rathouz (2007) in that missingness in X can be dependent on X. This extension intuitively makes sense in light of the fact that CCA makes no distinction between which covariates are fully observed and which are partially observed in the full sample.
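For concreteness, the full-data cause-specific analysis described in Section 2 (treating censorings and failures from other causes as censored) can be carried out with standard survival software. Below is a minimal sketch using the Python lifelines package; the data frame and column names (Y, D, and the covariates) are assumptions for illustration, not code from the paper.

```python
from lifelines import CoxPHFitter

def fit_cause_specific(df, cause, covariates):
    """Fit a Cox model for one cause-specific hazard: failures from all
    other causes, as well as censorings, are treated as censored at
    their observed time (Prentice and others, 1978)."""
    d = df.copy()
    d["event"] = (d["D"] == cause).astype(int)  # 1 only for the cause of interest
    cph = CoxPHFitter()
    cph.fit(d[["Y", "event"] + covariates], duration_col="Y", event_col="event")
    return cph

# e.g. one fit per cause: fit_cause_specific(df, cause=1, covariates=["X", "Z"])
```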
A special case of the sufficient missingness assumption is when R ⊥⊥ (T, D*, C) | (X, Z), in which case missingness in X is covariate dependent. As discussed by Bartlett and others (2014), such an assumption may sometimes be plausible when, as here, the covariates temporally precede the outcome. This is because, for R to depend on (T, D*, C) conditional on (X, Z), there would have to exist another baseline variable V which itself has an independent effect on (T, D*, C) and on R. As with the MAR assumption, in general it is not possible to verify the assumption R ⊥⊥ (T, D*) | (C, X, Z) from the observed data. It is, however, possible to check whether the observed data are compatible with a stronger version of the assumption. Specifically, consider the stronger assumptions that R ⊥⊥ (T, D*, X) | (C, Z) and that X ⊥⊥ C | Z (this condition being unnecessary if there is no censoring). Then, by ignoring the actual cause of failure, the results of Rathouz (2007) imply four testable conditions: (1) the censoring distribution given (X, Z) is the same among complete cases as in the full sample; (2) the censoring hazard given (X, Z) does not depend on X; (3) censoring is independent conditional on (R, Z); and (4) T ⊥⊥ R | Z. One can then check whether the observed data are compatible with these implications of the stronger assumptions. Specifically, (1) implies one can check whether (2) holds by fitting a model for the hazard of censoring (treating failures as censoring events) conditional on X and Z within the complete cases. If the stronger assumptions hold, one should find that the hazard for censoring in this model does not depend on X (i.e. (2) is satisfied). Next, (3) implies that censoring is independent conditional on (R, Z). Thus, (4) can be checked by fitting a model for the hazard of any failure (i.e. combining the failure types), conditional on R and Z. If (4) is satisfied, one should find that the hazard of any failure does not depend on R, conditional on Z. It is important to note, however, that if the observed data are not consistent with the implications of the stronger assumptions, this does not necessarily mean that the CCA is invalid.

MI ASSUMING MAR
As described in the introduction, MI assuming data are MAR is a commonly adopted approach for handling missing covariates. In this section, we first consider the plausibility of MAR. We then describe a recently proposed MI approach for the competing risks setting. Lastly, we propose an approach that imputes covariates from models which are compatible with the analyst's specified models for the cause-specific hazard functions.

Plausibility of MAR
For the moment, suppose that X is either scalar or a vector of covariates which is either entirely missing or entirely observed. The MAR assumption here means that R ⊥⊥ X | (Y, D, Z). MAR is plausible if missingness in X is thought to be dependent on Z. Alternatively, if missingness depends on T and/or D*, then MAR holds in the absence of censoring (since then Y = T and D = D*). However, if censoring is present and missingness depends on T and/or D*, then, following the results of Rathouz (2007) for time-to-event data, MAR does not hold. Nevertheless, MAR is a useful assumption, since it enables information to be extracted from the incomplete cases, and provides a starting point for possible MNAR sensitivity analyses.

Directly specified imputation models
Imputation models are in practice almost always specified directly as conditional models for the incomplete variable(s), conditional on the fully observed variables. In the present context, this means directly specifying a model for f(X | Y, D, Z).
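Returning to the CCA compatibility checks described above (conditions (2) and (4)), these amount to two auxiliary Cox fits. The sketch below again uses lifelines and assumes a data frame df with (hypothetical) columns Y, D, R, X, Z; it is an illustration of the checking strategy, not the paper's code.

```python
from lifelines import CoxPHFitter

# Check (2): among complete cases, does the censoring hazard depend on X?
cc = df[df["R"] == 1].copy()
cc["cens"] = (cc["D"] == 0).astype(int)       # censoring is the "event"; failures are censored
CoxPHFitter().fit(cc[["Y", "cens", "X", "Z"]],
                  duration_col="Y", event_col="cens").print_summary()

# Check (4): does the hazard of failure from any cause depend on R, given Z?
af = df.copy()
af["fail"] = (af["D"] > 0).astype(int)        # combine all failure types
CoxPHFitter().fit(af[["Y", "fail", "R", "Z"]],
                  duration_col="Y", event_col="fail").print_summary()
```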
In the simpler context of incomplete covariates in survival analysis, White and Royston (2009) previously derived imputation models for incomplete covariates which are approximately compatible with a Cox proportional hazards model for the hazard of failure, assuming the latter contains main effects of X and Z . Specifically, they proposed that the incomplete X be imputed using an imputation model with Z , D (the binary event indicator) and the baseline cumulative hazard function, as covariates. A better approximation additionally includes interactions between Z and the baseline cumulative hazard function. Since the baseline cumulative hazard function is not available prior to analysis, they proposed its approximation by the Nelson-Aalen estimator of the marginal cumulative hazard function. Through simulations, they demonstrated that their approach gives estimates that typically have little or small bias, although larger biases can occur with strong covariate effects. Recently, Resche-Rigon and others (2012) proposed an extension of the results of White and Royston (2009) to the competing risks setting. Assuming Cox proportional hazards models for each cause-specific hazard, they showed using a Taylor series expansion that an approximately compatible imputation model for X uses Z , D (as a factor variable) and H 0k (Y ), k = 1, . . . , K as covariates. Resche-Rigon and others (2012) further showed that this approximation could be improved by including the interactions Z × H 0k (·), k = 1, . . . , K . Since the cumulative baseline hazard functions are not available prior to imputation, they proposed their approximation by the corresponding Nelson-Aalen estimates of the (marginal) cumulative cause-specific hazard functions. Simulation results suggested that the approach led to estimates with little bias, and confidence intervals with nominal coverage. They also demonstrated that applying the approach of White and Royston (2009) treating failures from competing risks which were not of primary interest as censoring, led to bias. When X is vector valued, and there are multiple missingness patterns, Resche-Rigon and others (2012) proposed using the fully conditional specification MI approach (van Buuren, 2007). The approach proposed by Resche-Rigon and others (2012) is attractive since it can be readily implemented using existing software for MI. A potential drawback, however, is that the imputation model used is only approximately compatible with the assumed models for the cause-specific hazard functions. It is, therefore, expected that in certain situations (e.g. large covariate effects), the approach may lead to estimates with appreciable biases. Moreover, as described in detail by Bartlett and others (2015), more generally it is difficult to choose directly specified imputation models for incomplete covariates that are compatible with outcome models when the incomplete covariates are assumed to have non-linear effects or interactions in the substantive model. These difficulties can, however, be overcome by constructing an imputation model that is compatible with the assumed models for the cause-specific hazard functions. Substantive model compatible covariate imputation Suppose for the moment that X is scalar, and is MAR. We further assume that for each cause-specific hazard function, a proportional hazards model conditional on X and Z has been specified, as given in equation (2.2). 
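The directly specified approach just described requires the Nelson-Aalen estimates of the marginal cumulative cause-specific hazards, evaluated at each individual's observed time, as extra covariates in the imputation model. A minimal sketch of constructing these, assuming two causes and hypothetical column names:

```python
from lifelines import NelsonAalenFitter

# Marginal Nelson-Aalen estimates of the cumulative cause-specific hazards
# (ignoring covariates), evaluated at each individual's Y, for use as
# covariates in the imputation model (Resche-Rigon and others, 2012).
for k in (1, 2):                               # assumes K = 2 causes
    naf = NelsonAalenFitter()
    naf.fit(df["Y"], event_observed=(df["D"] == k).astype(int))
    df[f"H_NA{k}"] = naf.cumulative_hazard_at_times(df["Y"]).values
# The imputation model for the incomplete covariate then uses Z, D (as a
# factor), H_NA1, H_NA2, and optionally their interactions with Z.
```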
To ensure the imputation model for X is compatible with the substantive model, we note that

$$f(X \mid Y, D, Z) \propto f(Y, D \mid X, Z) f(X \mid Z).$$

The first part of this is the likelihood contribution given by equation (2.1). Thus a substantive model compatible imputation distribution for X is, up to a constant of proportionality, equal to

$$\left\{ \prod_{k=1}^{K} h_k(Y \mid X, Z)^{1(D=k)} \right\} \exp\left\{ -\sum_{k=1}^{K} \int_0^{Y} h_k(u \mid X, Z) \, du \right\} f(X \mid Z),$$

where we omit the terms corresponding to the censoring process on the assumption that X ⊥⊥ C | Z. If in a particular application such an assumption is deemed inappropriate, for example based on a preliminary model fit for the censoring process, this can be handled by treating censoring as an additional cause of failure and specifying a proportional hazards model for the censoring process conditional on X and Z. Thus, having specified models for the cause-specific hazards, the imputation distribution specification is completed by specifying a model f(X | Z, φ). The model for f(X | Z) can be chosen to be an appropriate model depending on the variable type of X. For example, we may use linear, logistic, ordinal, or multinomial logistic regression models for continuous, binary, ordered categorical, and unordered categorical variables, respectively. Count variables can be imputed using Poisson or negative binomial models. In Appendix B.1 of the Supplementary Materials (available at Biostatistics online), we describe how a Gibbs sampler can be constructed using this imputation approach, and give details about prior choice. In Appendix B.2 (see supplementary material available at Biostatistics online), we describe methods for sampling from the required conditional distributions. In practice, X is commonly vector valued, with multiple missingness patterns. In this case, a joint model could in principle be specified for X = (X 1, . . . , X p), and imputations be drawn from the posterior distribution of the missing data using a Gibbs sampler. One approach in this case is to factorize the joint distribution as a series of univariate conditional models, as proposed by Ibrahim and others (1999). Here, following the popular chained equations or fully conditional specification approach to MI, we instead adopt the substantive model compatible fully conditional specification (SMC-FCS) approach recently proposed by Bartlett and others (2015). Rather than specifying a joint model for f(X | Z), this approach involves specifying, for each partially observed variable X j, a model f(X j | X − j, Z, φ j), where X − j denotes the components of X except the jth. The partially observed X j are then imputed one at a time. Further details for the algorithm are given in Appendix B.3 of the Supplementary Materials (available at Biostatistics online). The SMC-FCS approach ensures that each partially observed variable is imputed from a model that is compatible with the substantive model, and at the same time permits flexibility, since different model types can be specified for each f(X j | X − j, Z, φ j), j = 1, . . . , p. A drawback of the SMC-FCS algorithm is that these models may themselves be mutually incompatible, such that the resulting sampler does not draw imputations from a well-defined Bayesian joint model. However, given recent theoretical developments regarding the properties of standard FCS MI (Liu and others, 2013; Hughes and others, 2014), we believe the possibility of such incompatibility may not be such a great practical concern for SMC-FCS, provided the models f(X j | X − j, Z, φ j), j = 1, . . . , p fit well.
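As an illustration of drawing from this compatible imputation distribution, the sketch below uses an independence Metropolis-Hastings step with the covariate model f(x | z) as the proposal, so the acceptance ratio reduces to the ratio of the f(Y, D | X, Z) factors. This is a simplified stand-in for the rejection/Gibbs details in the paper's appendix: a scalar normal X, two causes, and all parameter values (b1, b2, alpha, sigma, and the cumulative baseline hazards H01 = H_{01}(Y), H02 = H_{02}(Y)) are assumed available from the current iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f_yd_given_xz(x, d, z, H01, H02, b1, b2):
    """log f(Y, D | X, Z), up to terms free of x, under proportional
    cause-specific hazards with linear predictors b[0]*x + b[1]*z."""
    lp1, lp2 = b1[0] * x + b1[1] * z, b2[0] * x + b2[1] * z
    ll = -H01 * np.exp(lp1) - H02 * np.exp(lp2)   # log survival factor
    if d == 1:
        ll += lp1                                  # h_{01}(Y) is free of x and drops out
    elif d == 2:
        ll += lp2
    return ll

def mh_impute_x(x_cur, d, z, H01, H02, b1, b2, alpha, sigma):
    """One independence MH update targeting p(x | y, d, z) ∝ f(y, d | x, z) f(x | z)."""
    x_prop = rng.normal(alpha[0] + alpha[1] * z, sigma)   # proposal drawn from f(x | z)
    log_ratio = (log_f_yd_given_xz(x_prop, d, z, H01, H02, b1, b2)
                 - log_f_yd_given_xz(x_cur, d, z, H01, H02, b1, b2))
    return x_prop if np.log(rng.uniform()) < log_ratio else x_cur
```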
SIMULATIONS
In this section, we report the results of simulations to evaluate the performance of CCA and the MI approaches described previously. Values in X 3 were made missing (at random) with probability 0.25 + 0.5 X 1, leading to 50% missing values. We imputed the missing values in X 3 using three different directly specified conditional imputation models for f(X 3 | X 1, X 2, T, D) using the R package MICE. First, following the results of Resche-Rigon and others (2012), X 3 was imputed using a normal linear regression imputation model, using the event indicator D as a categorical predictor, the Nelson-Aalen estimates of the (marginal) cumulative hazard functions (i.e. ignoring covariates), Ĥ NA1(Y) and Ĥ NA2(Y), and X 1, X 2 as covariates (FCS competing). Secondly, we used an imputation model based on the more accurate approximation derived by Resche-Rigon and others (2012), by additionally including interaction terms between each of X 1, X 2 and each of Ĥ NA1(Y) and Ĥ NA2(Y) (FCS competing inter). Thirdly, to explore the impact of ignoring the second cause of failure at the imputation stage, we also imputed X 3 as if the data were (single failure type) survival data, by treating failures from the second cause as if they were censorings when defining D and calculating Ĥ NA1(Y), and omitting Ĥ NA2(Y) from the imputation model (FCS survival). Note that here we did not include the interactions between X 1, X 2, and Ĥ NA1(Y). Next, we imputed X 3 using the substantive model compatible approach described in Section 4.3, assuming (correctly here) that X 3 | X 1, X 2 follows a normal linear regression model, and assuming Cox models with linear covariate effects for both causes of failure (SMC-FCS competing). We then imputed again using the substantive model compatible approach, acting as if the data were single failure type data, considering failures only due to cause one (SMC-FCS survival). For all the imputation methods, five imputations were generated for each dataset. With each imputed dataset, we fitted Cox proportional hazards models for each cause of failure, and combined estimates of the two sets of regression coefficients β 1 and β 2 using Rubin's rules. Using each imputation, we also estimated the cumulative cause-specific hazard function for cause one at t = 0.5, and obtained standard errors using the R function survfit. These were similarly combined across the five imputations using Rubin's rules.

Table 1 shows the results of the simulations. First, we note the considerable efficiency loss due to missing data, as shown by the larger empirical SDs for complete case estimates compared with full data. In line with the results of Section 3, CCA is unbiased since missingness is covariate dependent. Estimates based on FCS MI accounting for competing risks (FCS competing) showed moderately large biases for most parameters, and consequently low confidence interval coverage for some parameters. This can be attributed to the fact that the imputation model used is only approximately compatible with the cause-specific hazard models, and the baseline cumulative hazards are estimated by the marginal Nelson-Aalen cumulative hazard estimator. The estimate of the first cumulative baseline hazard function at t = 0.5 was also biased upward. Including interactions between the estimated cumulative hazard functions and X 1, X 2 (FCS competing inter) reduced the biases considerably. Moreover, confidence interval coverage was improved, although for β 13 coverage was still poor.
In line with the simulation results of Resche-Rigon and others (2012), performance was worse when the second cause of failure was treated as if it were censoring (FCS survival), with larger biases and lower confidence interval coverage. Estimates from SMC-FCS accounting for the competing risks showed little bias, and confidence interval coverage close to or slightly below the nominal 95% level. Of particular note, the cumulative baseline hazard function at t = 0.5 for the first cause of failure was estimated with little bias, and confidence intervals had only slight undercoverage. Comparing empirical standard deviations, we see that SMC-FCS recovers considerable information for the coefficients of the fully observed covariates X 1 and X 2, while for the coefficient of the partially observed X 3 there is no efficiency gain. As expected, imputing while treating the second cause of failure as censoring (SMC-FCS survival) led to biased estimates and confidence interval coverage below the nominal level, particularly (as one might expect) for β 2.

In a second set of simulations, in which the cause-specific hazard models included X 2 X 3 interactions, X 2 was imputed in the FCS approaches using logistic regression, conditioning on X 1, X 3 and the event indicator and Nelson-Aalen cumulative hazard estimators as before. In "FCS competing inter", as before, we included interactions between X 1 and the Nelson-Aalen cumulative hazard estimates, and similarly between X 2 (X 3) and the cumulative hazard estimates when imputing X 3 (respectively, X 2). Note, however, that no further modifications were made to attempt to allow for the X 2 X 3 interactions in the cause-specific hazard models, with these interaction values simply being passively imputed in the final imputed datasets. In the SMC-FCS approaches, X 2 was imputed using a logistic model conditional on X 1 and X 3, and the X 2 X 3 interactions were included in the cause-specific Cox models. The number of iterations for SMC-FCS was increased from its default of 10 to 20, since MCMC convergence plots from initial simulations suggested that more than 10 iterations were required for convergence due to the presence of the interaction term. Table 2 shows the results. The FCS approaches led to biased estimates and confidence intervals with very poor coverage for the interaction parameters, because FCS (at least as implemented here) does not account for the interactions in the cause-specific hazard models. In contrast, SMC-FCS accounting for both competing causes led to valid inferences, while SMC-FCS treating the second cause as censoring led, as expected, to very biased estimates of β 2, although biases for β 1 were smaller.

Three sets of additional simulations are reported in Appendix C of the Supplementary Materials (available at Biostatistics online). In the first set, missingness was dependent on D, such that CCA was biased, while SMC-FCS gave valid inferences. In the second set, X 3 was made missing with missingness dependent on X 3 (MNAR), such that CCA was unbiased, while the MI approaches were biased. In the final set, missingness in X 3 was again dependent on X 1, but with the hazard for the second failure type not dependent on X 3. Here both SMC-FCS approaches were unbiased, with SMC-FCS survival being slightly more efficient.

ILLUSTRATIVE ANALYSIS
To illustrate the two MI approaches, we consider data from the third US National Health and Nutrition Examination Survey (NHANES III), which was conducted between 1988 and 1994.
The overall study involved around 40 000 individuals, and consisted of an in-depth survey of their health and nutrition status, obtained from physical examinations and interview. Mortality status at the end of 2011 is available through linkage to the US National Death Index. Here we consider the subset of individuals aged between 60 and 70 at the time of the original survey, which consists of 2583 individuals. By the end of 2011, 1492 (57.8%) had died. Cause of death was classified using the ICD-10 system. For the illustrative analyses, here we focus on how the hazard for death due to cardiovascular disease (CVD) relates to the risk factors shown in Table 3. Here death due to CVD is of primary interest, and deaths due to other causes are competing causes. We categorize deaths as due to CVD, cancer, and other causes, separating out cancer as it represents a large proportion of deaths and may have quite different associations with the risk factors than other causes. There were 358 CVD deaths, 379 cancer deaths, and 755 deaths due to other causes. We assumed a Cox proportional hazards model for the hazard of death due to CVD, with main effects of each of the risk factors listed in Table 3, and assuming linear effects (on the log hazard scale) of continuous variables. The first column of Table 4 shows estimated log hazard ratios for each risk factor based on the 1106 (42.8%) complete cases. This shows statistically significant evidence for independent associations of each risk factor with hazard of death due to CVD, except for diabetes, with directions of association as expected based on the prior knowledge of CVD. A global test of the proportional hazards assumption using Schoenfeld residuals revealed no evidence ( p = 0.77) against the assumption. To investigate whether the CCA is valid, following Section 3, we first argue that the assumption that X ⊥ ⊥ C | Z is satisfied here because censoring is almost exclusively due to the length of available followup. Next we fitted a Cox model where events were taken as death from any cause, with fully observed sex, age, diabetes (dropping the three observations with diabetes missing) and an indicator R of whether the other risk factors were all available or not, as covariates. Unfortunately, this showed evidence ( p < 0.001) that being a complete case was associated with increased hazard of death, conditional on sex, age, and diabetes. The data are thus not consistent with an assumption that R ⊥ ⊥ (T, D * , X ) | (C, Z ). Nevertheless, the CCA may still be valid, if for example missingness in the partially observed covariates is dependent only on X and Z . This is arguably quite plausible for variables such as smoking and alcohol consumption. Next we applied the FCS and SMC-FCS approaches to multiply impute the missing covariate values, using 50 imputations for each method. As in the simulation study, we applied each either accounting for or ignoring (as censoring) failures from causes of death other than the one of interest (CVD). Table 4 shows the estimated log hazard ratios and corresponding standard errors. Estimates and standard errors were very similar across all four MI methods, suggesting that the approximations being made in the directly specified FCS approach are here quite reasonable. The MI standard errors were uniformly smaller than those from CCA, even for the coefficients of fully observed covariates. However, the MI estimates differed materially from the CCA estimates for some risk factors, such as gender, diabetes, and SBP. 
Unfortunately, we do not believe it is possible to establish here from the observed data whether the CCA assumption or MAR (or neither) is true. From considerations of the nature of the variables, a covariate-dependent MNAR missingness mechanism, under which CCA is valid, is arguably more plausible than MAR. DISCUSSION We have explored approaches for handling missing covariates in competing risks analysis when one is interested in modeling the cause-specific hazard functions. We have shown under what assumptions CCA is valid, and suggested how the observed data can be checked for compatibility with a stronger version of this assumption. Even when CCA is valid, it is however inefficient. Recently Bartlett and others (2014) developed an approach for improving upon the efficiency of CCA for conditional mean models when a covariate-dependent MNAR mechanism is assumed, and further work is warranted to extend this to survival and competing risks settings. Under an MAR assumption, we have proposed a flexible approach to multiply impute missing covariates in competing risks data, based on proportional hazards models for cause-specific hazards. The approach automatically handles user-specified covariate effects in these models, including interactions and nonlinear covariate effects. Through simulation we have demonstrated its good finite sample performance, for both the regression coefficients indexing models for cause-specific hazards and for estimation of the cumulative cause-specific baseline hazard functions. In contrast, we have empirically shown that directly specified approximately compatible imputation models in general lead to biased estimates. The SMC-FCS approach we have described relies on the analyst specifying appropriate models for the cause-specific hazard functions and the covariate models f (X j | X − j , Z , φ j ). The assessment of model fit in the context of MI approaches, or indeed when data are incomplete more generally, is challenging. In the present setting, we would recommend that analysts assess the fit of the covariate f (X j | X − j , Z , φ j ) models fitted to those corresponding complete cases. While these fits may themselves be biased (when missingness is not completely at random), if the model appears to fit well in the complete cases, it is arguably plausible that the models are reasonable for the entire sample. For the cause-specific hazard models, if missingness can be assumed to be at most covariate dependent, then again model assessment and selection could be applied to corresponding complete case fits prior to imputation of missing covariates. Alternatively, one could impute missing covariates using SMC-FCS, and then apply model diagnostics for the cause-specific hazard models to the imputed datasets. The obvious limitation with such a strategy is that the missing covariates will have been imputed assuming that the analyst's specified cause-specific models are correctly specified, which would be expected to weaken the potential to detect misspecification in the cause-specific hazard models. In the context of single failure time data, Qi and others (2010) found that using directly specified conditional MI methods for missing covariates gave estimates with large bias when the partially observed covariate was related to the censoring time. Our results explain their finding, and show that if X and C are related, the censoring process must be modeled as an additional competing risk when imputing missing covariates. 
Often in competing risks settings, primary interest will be in modeling the hazard of failure due to just one cause. In this case, in the absence of missing covariates, models need not be specified for the causes of failure that are not of interest. An advantage of CCA is that similarly a model need only be specified for the cause(s) of interest. In contrast, if missing covariates are imputed, models must be specified for these causes (unless the analyst is willing to assume that the cause-specific hazards for the causes not of interest are unrelated to X conditional on Z). In this situation, one must choose how to define the competing causes. At one extreme, all of the causes of failure that are not of interest could be combined to form a second cause of failure (in addition to the cause of interest). However, this may be statistically inefficient when the partially observed covariate(s) have different effects on the causes that have been combined. Moreover, if missingness in X is related to failure type, amalgamating the causes not of interest into a single cause may render the MAR assumption invalid, leading to biased estimates. A closely related approach to handling missing covariates is to fit a single Bayesian joint model, allowing for missingness in the covariates, as described in the case of single failure type data by Chen and others (2006). The strengths of such an approach are that one uses a coherent joint model for the data, and uses well-defined priors for all model parameters. However, with multiple partially observed variables, arguably specifying joint models becomes more challenging. Moreover, the Gibbs sampler developed by Chen and others (2006) is more involved than the SMC-FCS algorithm, and unlike SMC-FCS, is not currently available in software. A further alternative approach to handling missing data is based on inverse probability weighting (IPW). IPW and doubly robust estimators assuming MAR have been developed for the Cox model with single failure time data (Wang and Chen, 2001; Qi and others, 2010), and further work is warranted on extending these to the competing risks setting. Lastly, we note that an alternative approach to competing risks analysis is based on modeling covariate effects on the cumulative incidence function (Fine and Gray, 1999), and further research is similarly warranted to explore missing covariates within this framework.
PV Module Parameters Estimation Using Newton Raphson The estimation of the solar photovoltaic (PV) system's electrical model parameters, namely the photon-generated current, the diode saturation current, the series resistance, the shunt resistance, and the diode ideality factor, is desirable to predict the real performance characteristics of solar PV under varying environmental conditions. Performance indices, such as the PV characteristic curves, are estimated for various solar PV panels using Newton-Raphson (NR) to reveal the effectiveness of the proposed method. Validation against experimental data has also been considered. Finally, through a comparative analysis of the results, it is revealed that the proposed method offers solar PV characteristics closer to the real characteristics. INTRODUCTION Solar panels harness the sun's energy in the form of light and convert it into electricity. Although the average consumer might associate solar panels with residential rooftop assemblies, solar panels are available for a wide range of applications, including powering individual gadgets, electronic devices and vehicle batteries. The reserves of fossil fuels are rapidly decreasing at present due to the increased use of thermal power plants, and the air pollution associated with the combustion of fossil fuels is increasing. Hence, in the present scenario, there is an urgent need to speed up the research and development of renewable energy technology, especially solar energy, to meet the world's energy demand. The goal of this dissertation is to develop and apply an integrated assessment framework for one of the sustainable electricity options, solar photovoltaic (PV) technology. In this dissertation, different types of photovoltaic modules that are widely manufactured in the market at present are considered, and the future implications of using PV technology in the electricity sector are evaluated. The word 'sustainable' in this context implies energy, environmental and economic sustainability. Higher output energy generated by the PV panels during their lifetime, when compared to the input energy for manufacturing and end-of-life management, constitutes energy sustainability. Generating cleaner electricity (lower criteria pollutants and greenhouse gas emissions) when compared to the grid electricity sources constitutes environmental sustainability. PV electricity mitigates emissions from thermal power plants to the grid; by including such monetary benefits from mitigation in the evaluation of economic performance, PV technology encourages economic sustainability. ANALYSIS OF SINGLE DIODE PV MODULE A single diode model of the solar PV module has the unknown parameters shown in Figure 1, namely Ilg, Isat, A, Rse, and Rsh. By taking the datasheet information provided by the manufacturer of the PV module at standard test conditions (STCs), the PV module's parameters are estimated. Single Diode Solar PV Module A single diode model of the PV module is shown in Figure 1. Using Kirchhoff's current law, the I-V relationship of the PV module can be written as follows [10,11]. By using the PV module parameters obtained at STCs, the values of the five parameters and the MPP of the PV module can be estimated at any temperature and irradiance condition. The important parameters to be noted from the manufacturer's datasheet are the short circuit current (Isc), the open circuit voltage (Voc), and the maximum power point voltage and current (Vmpp and Impp).
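For reference, the implicit single-diode relation can be sketched in Python as follows; since the displayed equation itself is not reproduced above, the standard grouping of the ideality factor A, the cell count Ns and the per-cell thermal voltage Vt is assumed here, and the default values are illustrative rather than datasheet values.

import numpy as np

def single_diode_residual(I, V, Ilg, Isat, A, Rse, Rsh, Ns=36, Vt=0.02585):
    """Residual f(I) = 0 of the single-diode model at terminal voltage V:
    I = Ilg - Isat*(exp((V + I*Rse)/(A*Ns*Vt)) - 1) - (V + I*Rse)/Rsh."""
    return (Ilg - Isat * (np.exp((V + I * Rse) / (A * Ns * Vt)) - 1.0)
            - (V + I * Rse) / Rsh - I)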
At STCs the irradiance (Gstc) is 1000 W/m² and the cell temperature (Tstc) is 25 °C. The datasheet also provides temperature coefficients for the short circuit current (ki), the open circuit voltage (kv), and the maximum power (kp). Extraction of PV Module Parameters To estimate the five unknown parameters of the PV module from the nonlinear equation (1), five independent equations are required. The first three equations, (4), (7), and (9), are derived from Eq. (1) by applying the short circuit, open circuit, and MPP conditions. The remaining two equations, (12) and (14), are derived by differentiating power and current with respect to voltage. Short Circuit Condition (SCC) Under the short circuit condition, after some approximation, the light-generated current (Ilg) can be written as in Eq. (4). Open Circuit Condition (OCC) Under the open circuit condition, the equation is rearranged and the reverse saturation current is expressed accordingly; substituting Ilg from the short circuit condition, the saturation current can be derived as in Eq. (7). Maximum Power Point (MPP) Condition The maximum power point calculation follows Eq. (8); inserting the corresponding expressions into this equation yields Eq. (9). Calculation of Initial Values Because of high sensitivity, numerical methods may fail to converge due to improper selection of the initial values of the PV module parameters, so the following equations are considered when selecting them. The PV module parameters can be obtained from the open and short circuit tests. First, these three equations are solved by the Newton-Raphson method and the value of Vt is obtained; the remaining parameters are then obtained from the short and open circuit tests using the value of Vt. Effect of Varying Irradiance and Temperature The light-generated current and short circuit current are directly proportional to irradiance and depend on temperature. The light-generated current can be determined as a function of temperature. The short circuit current, open circuit voltage and light-generated current can be estimated at any temperature and irradiance using the corresponding three equations. The thermal voltage is directly proportional to the PV panel cell temperature. The diode reverse saturation current, which is a function of irradiance and temperature, can be calculated from its defining equation. Estimation of Maximum Power Point Proper initial values should be chosen to estimate an accurate MPP under given operating conditions. Under varying irradiance and temperature, the MPP is obtained by using the parameters of the PV module estimated at STCs. In general, the PV module parameters change considerably under various environmental conditions. The value of the shunt resistance is considered constant in [10], but in this paper the shunt resistance of the PV module is taken to be inversely proportional to the short circuit current under varying operating conditions, and a new equation is introduced for the shunt resistance variation with respect to temperature and irradiance. To find the MPP, the maximum voltage is found as a function of temperature and irradiance, and the maximum current likewise serves as a function of temperature and irradiance. These three equations can be solved using the Newton-Raphson method and the Generalised Hopfield Neural Network (GHNN) method.
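A generic Newton-Raphson solver for such a square system of nonlinear equations can be sketched as below; the residual function F and the Jacobian J are left to the caller, and the 1×10⁻¹⁰ tolerance follows the error value quoted in the next section.

import numpy as np

def newton_raphson_system(F, J, x0, tol=1e-10, maxiter=100):
    """Solve F(x) = 0 by the update x <- x - J(x)^-1 F(x); returns (solution, iterations)."""
    x = np.asarray(x0, dtype=float)
    for it in range(1, maxiter + 1):
        fx = np.atleast_1d(F(x))
        if np.max(np.abs(fx)) < tol:          # error criterion, 1e-10 as in the text
            return x, it
        x = x - np.linalg.solve(np.atleast_2d(J(x)), fx)
    raise RuntimeError("Newton-Raphson did not converge within maxiter iterations")

The same iteration with n = 1 also solves the implicit single-diode relation above for the current at a given voltage, with J reduced to the scalar derivative ∂f/∂I.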
In these two methods the error tolerance is taken as 1×10⁻¹⁰. This completes the mathematical modelling of the 80 W PV module that can be used for extracting the five parameters and finding the maximum power point of the prescribed PV module. ESTIMATION OF PV MODULE PARAMETERS The parameters of the PV module are namely Ilg, Isat, A, Rse, and Rsh. By taking the datasheet information in Table 1, provided by the manufacturer of the PV module at standard test conditions (STCs), the PV module's parameters are estimated. By using the PV module parameters obtained at STCs, the values of the five parameters and the MPP of the PV module can be estimated at any temperature and irradiance condition. PV Module Parameters Estimation Under STCs Using Newton-Raphson Method This section concerns the solution of a set of nonlinear equations through the Newton-Raphson method. Consider a set of n nonlinear equations in n variables x1, x2, ..., xn. The square matrix of partial derivatives is called the Jacobian matrix J, with J^(1) indicating that the matrix is evaluated at the initial values x1^(0), ..., xn^(0). The solution of (3.6) is then written as an update of these values. Since the Taylor series is truncated by neglecting the second and higher order terms, the correct solution cannot be expected at the end of the first iteration, so further iterations are required. Step 3: If Iter = maxiter, stop; otherwise go to the next step. Step 4: Evaluate the values Vt(new), Rsh(new), Rse(new) and A. Step 5: Check the error values against 1×10⁻¹⁰. Step 7: Since the correct solution cannot be expected at the end of the first iteration, continue with further iterations, Iter = Iter + 1. The parameters estimated at standard test conditions (STCs) are tabulated for different PV modules: the 250 W solar panels KD245GX, HST60FXXXM and HST60FXXXP, and the 80 W solar panels U5-80, Shell SP70 and HST36FXXXP. The five parameters of the PV modules estimated with the N-R method, with the help of equations (12)-(14) at STCs, are shown in Table 2. The light-generated current and short circuit current are directly proportional to irradiance and also depend on temperature. The unknown parameters of the PV module, namely Ilg, Isat, A, Rse, and Rsh, can be estimated at any temperature and irradiance condition. For the 250 W solar panels KD245GX, HST60FXXXM and HST60FXXXP and the 80 W solar panels U5-80 and Shell SP70, the parameters estimated at various temperature and irradiance conditions are shown in Table 3. CONCLUSION In this work, the five unknown parameters of the following PV modules were estimated: KD245GX, U5-80, Shell SP70, HST60FXXXM, HST36FXXXP and HST60FXXXP. The N-R method is used to estimate the five unknown parameters of the PV modules at STCs. Good convergence is achieved in the N-R method during MATLAB coding, due to the selection of appropriate initial values from the series and shunt resistance equations. The SUR method is used to extract the MPP at different environmental conditions by considering the varying nature of the shunt resistance, series resistance, and ideality factor. In particular, through the proposed equations for the shunt resistance and ideality factor of the PV model, an accurate MPP value is obtained. For a wide range of operating conditions, the MPP and the five unknown parameters of various PV modules are estimated. FUTURE SCOPE The proposed methodology estimates the five unknown parameters of the PV module at STCs and at variable temperature and irradiance conditions.
The five unknown parameters of the PV module can also be obtained with the GHNN-based optimization technique. When harvesting solar power from the PV module, a converter helps extract maximum power by taking the PV module's voltage and current as references. Instead of taking references from the PV module, the five parameters determined for different environmental conditions can be used to generate the pulse signals that drive the converter to deliver maximum power from the PV module.
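Putting the two sketches above together, a P-V sweep that locates the MPP might look as follows; the parameter values are illustrative only and are not taken from any of the datasheets discussed above.

import numpy as np

params = dict(Ilg=4.8, Isat=2e-7, A=1.3, Rse=0.5, Rsh=300.0)

def current_at(V, I0=0.0):
    """Module current at voltage V, via Newton-Raphson on the implicit relation."""
    nsvt = params["A"] * 36 * 0.02585        # A * Ns * Vt, matching the defaults above
    f = lambda x: np.array([single_diode_residual(x[0], V, **params)])
    dfdI = lambda x: np.array([[-params["Isat"] * params["Rse"] / nsvt
                                * np.exp((V + x[0] * params["Rse"]) / nsvt)
                                - params["Rse"] / params["Rsh"] - 1.0]])
    sol, _ = newton_raphson_system(f, dfdI, [I0])
    return sol[0]

V = np.linspace(0.0, 21.0, 200)
I = np.array([current_at(v) for v in V])
P = V * I
k = P.argmax()
print("Vmpp = %.2f V, Impp = %.2f A, Pmpp = %.1f W" % (V[k], I[k], P[k]))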
POST1/C12ORF49 regulates the SREBP pathway by promoting site-1 protease maturation Sterol-regulatory element binding proteins (SREBPs) are the key transcriptional regulators of lipid metabolism. The activation of SREBP requires translocation of the SREBP precursor from the endoplasmic reticulum to the Golgi, where it is sequentially cleaved by site-1 protease (S1P) and site-2 protease and releases a nuclear form to modulate gene expression. To search for new genes regulating cholesterol metabolism, we perform a genome-wide CRISPR/Cas9 knockout screen and find that partner of site-1 protease (POST1), encoded by C12ORF49, is critically involved in the SREBP signaling. Ablation of POST1 decreases the generation of nuclear SREBP and reduces the expression of SREBP target genes. POST1 binds S1P, which is synthesized as an inactive protease (form A) and becomes fully mature via a two-step autocatalytic process involving forms B'/B and C'/C. POST1 promotes the generation of the functional S1P-C'/C from S1P-B'/B (canonical cleavage) and, notably, from S1P-A directly (non-canonical cleavage) as well. This POST1-mediated S1P activation is also essential for the cleavages of other S1P substrates including ATF6, CREB3 family members and the α/β-subunit precursor of N-acetylglucosamine-1-phosphotransferase. Together, we demonstrate that POST1 is a cofactor controlling S1P maturation and plays important roles in lipid homeostasis, unfolded protein response, lipoprotein metabolism and lysosome biogenesis. Electronic supplementary material The online version of this article (10.1007/s13238-020-00753-3) contains supplementary material, which is available to authorized users. INTRODUCTION Cholesterol metabolism is a complicated yet highly regulated process composed of biosynthesis, uptake, transport, utilization, export and esterification (Luo et al., 2020). Cholesterol biosynthesis and uptake represent the inputs and are switched on when cellular needs are unmet, whilst they are shut down when cellular needs are surpassed. Cholesterol within the cell is dynamically transported across the plasma membrane (PM) and various organelle membranes, serving as a membrane constituent, a signaling molecule and a precursor to other biologically active molecules. A balanced interplay of these pathways is essential for normal cellular functions and human health. Deregulation of cholesterol metabolism can lead to many disorders including cardiovascular disease, neurodegenerative disease and cancers (Ikonen, 2006; Kuzu et al., 2016; Chen et al., 2019). One of the master regulators of cholesterol metabolism is sterol regulatory element-binding protein (SREBP) 2, which, through modulating expression of cholesterogenic enzymes and the low-density lipoprotein (LDL) receptor (LDLR), governs cholesterol biosynthesis from acetyl-CoA and uptake from extracellular LDL particles. SREBP2 and the other two isoforms, SREBP1a and SREBP1c, are members of the SREBP family of basic helix-loop-helix leucine zipper transcription factors (Horton et al., 2002). SREBP is initially synthesized as a precursor protein with the N- and C-terminal ends facing the cytosol and two transmembrane segments spanning the endoplasmic reticulum (ER). Upon cholesterol depletion, SREBP and the associated SREBP-cleavage activating protein (SCAP), with the help of Cideb, rapidly translocate from the ER to the Golgi apparatus (Su et al., 2019).
At the Golgi, SREBP is cleaved by site-1 protease (S1P) in the lumenal loop, followed by a second cleavage by site-2 protease (S2P) within the membrane-spanning domain. This liberates the N-terminal fragment that enters the nucleus and activates the transcription of genes controlling cholesterol biosynthesis and uptake, thereby restoring cellular cholesterol levels. A corollary of the SREBP activation model is that factors involved in ER exit, ER-to-Golgi transport and Golgi tethering of the SREBP precursor, as well as those in generation of the nuclear form of SREBP (n-SREBP), can critically regulate SREBP signaling. For example, under cholesterol repletion conditions, INSIGs, ERLINs and TRC8 are induced to bind and retain the SCAP/SREBP complex in the ER, thereby inactivating the SREBP pathway (Irisawa et al., 2009; Huber et al., 2013; Brown et al., 2018). By contrast, AKT and PAQR3 positively regulate the SREBP pathway by promoting anterograde trafficking and Golgi anchoring of the SCAP/SREBP complex, respectively (Du et al., 2006; Xu et al., 2015). Compared with the above mechanisms controlling localization of the SREBP precursor, how its cleavage at the Golgi is regulated is less clear. S1P (also known as subtilisin kexin isozyme-1) is a serine protease of the subtilisin/kexin proprotein convertase family (Seidah and Prat, 2012). The newly synthesized S1P is an inactive type I transmembrane precursor protein (pro-S1P) and requires multiple proteolytic events to become fully mature. Pro-S1P is first cleaved by a signal peptidase as it translocates into the ER lumen. This exposes the N-terminal prodomain, which undergoes autocatalytic processing at RKVF133↓RSLK137↓, RRAS166↓ and RRLL186↓ sequentially (Espenshade et al., 1999; Elagoz et al., 2002; da Palma et al., 2014). The cleaved prodomain fragments remain associated with the rest of the protein to assist correct folding while retaining enzymatic activity (da Palma et al., 2014; da Palma et al., 2016). Of all three S1P forms generated, only the completely processed protein can reach the Golgi, where it acts on the SREBP precursor (Sakai et al., 1998; Espenshade et al., 1999). Despite the understanding of S1P autoprocessing, little is known about whether and how, if at all, this process is regulated. In the present study, we use a genome-wide CRISPR/Cas9 knockout (KO) screen to search for new regulators involved in cholesterol homeostasis. An uncharacterized gene, C12ORF49, is found to be tightly correlated with a lower PM cholesterol level in our screen. We further demonstrate that C12ORF49 interacts with S1P and affects cholesterol metabolism by promoting S1P maturation. Hence, we rename C12ORF49 the partner of site-1 protease (POST1) to reflect its biological function. Depletion of POST1 reduces S1P-mediated proteolytic cleavage of SREBP2 and other S1P substrates. These results reveal POST1 as a newly identified factor of the SREBP pathway and S1P maturation. Genome-wide screen identifies that POST1 regulates cholesterol homeostasis We first set out to identify new regulators of cellular cholesterol homeostasis using a genome-scale CRISPR/Cas9 KO screen (Sanjana et al., 2014).
Figure 1. Genome-wide screen identifies that POST1 is involved in cholesterol metabolism. (A) Schematic representation of the screening strategy. HeLa cells stably expressing Cas9-Flag were transduced with lentivirus expressing a genome-wide sgRNA library and then treated with puromycin (Puro) for 4 days.
Surviving cells were depleted of cholesterol by incubating in the medium containing 5% lipoprotein-deficient serum (LPDS) plus 10 µmol/L mevalonate and 1 µmol/L lovastatin for 16 h. Cells were then incubated with 50 µg/mL LDL for 4 h followed by 300 µg/mL amphotericin B (AmB) for 1 h. AmB could bind PM cholesterol, form pores and kill normal cells. The mutant cells defective in the SREBP-LDLR axis or cholesterol trafficking were resistant to AmB because of less PM cholesterol. After five rounds of challenges, the sgRNA inserts from surviving cells and those from transduced cells prior to the first challenge were amplified and subjected to deep sequencing. (B) Scatter plot showing 115 highly enriched genes (Supplementary Material, Table S1) in (A). Genes with a phenotype value (fold change [log2]) > 1 and P-value < 0.001 are in blue (except for POST1 in red) and are shown in smaller scales of the x- and y-axes (inset). Those with a phenotype value < 1 are in gray. (C) HeLa cells and two lines of POST1 KO cells generated by the CRISPR/Cas9 technique (POST1 KO-1# and POST1 KO-2#) were depleted of cholesterol for 16 h. Cells were then incubated in the medium containing 5% LPDS, 50 µg/mL LDL and 1 µmol/L lovastatin in the absence or presence of 2 µg/mL U18666A for 4 h, and then in 300 µg/mL AmB for 1 h. (D) The predicted topology of the human POST1 protein. (E) HeLa cells were transfected with pCMV-POST1-EGFP (green) and pCMV-DsRed2-KDEL (red) for 48 h, and immunostained with the antibody against GM130 (magenta). Boxed areas are shown at a higher magnification as numbered below. Scale bar, 10 µm (main), 1 µm (inset). (F) HeLa cells were transfected with pCMV-POST1-EGFP for 48 h and harvested. Lysates were treated with 10 units/μL Endo H.
A four-day puromycin selection followed to allow the untransduced cells to be all killed. Surviving cells were deprived of cholesterol by incubating in the cholesterol-depletion medium containing lipoprotein-deficient serum plus lovastatin for 16 h. This condition activates the SREBP pathway so that LDLR expression is highly induced (Goldstein and Brown, 2009). Cells were then exposed to LDL and treated with amphotericin B (AmB), an antibiotic that binds cholesterol in the PM, forms pores and causes cell death (Wei et al., 2017). We reasoned that the cells with normal SREBP activation and cholesterol trafficking machineries could upregulate LDLR expression, take up exogenous LDL and rapidly redistribute cholesterol towards the PM by several mechanisms (Chu et al., 2015; Infante and Radhakrishnan, 2017; Luo et al., 2017, 2019; Xiao et al., 2019), thereby failing the subsequent AmB selection due to PM leakage induced by AmB. By contrast, cells defective in either the SREBP-LDLR axis or cholesterol trafficking had less PM cholesterol and were resistant to AmB treatment. To ascertain that AmB selection was stringent enough to separate defective cells from the normal ones without inducing general cytotoxicity, we subjected untransduced HeLa/Cas9-Flag cells to a parallel "cholesterol depletion-repletion-AmB selection" challenge except that U18666A, which binds NPC1 and blocks lysosomal cholesterol export (Lu et al., 2015), was added together with LDL. Cells without AmB exposure and those treated with AmB alone were used as controls. Indeed, AmB-induced cell death was effectively rescued by U18666A (Fig. S1A).
After 5 rounds of challenges, the sgRNA inserts from surviving cells and those from transduced cells prior to the first round of challenge were amplified and subjected to deep sequencing. Candidate genes were identified using MAGeCK (Li et al., 2014). Those with at least 2 gRNA hits were selected and ranked by LFC (log2 fold change). The LFC cutoff value was set to > 0. A total of 115 genes were found highly enriched in the cells that survived 5 rounds of challenge (Fig. 1B; Table S1), among which were NPC1, SCAP and LDLR, well-established regulators of cholesterol trafficking and metabolism. Specifically, loss of NPC1 causes cholesterol accumulation in lysosomes whilst loss of SCAP impairs activation of the SREBP pathway. LDLR is a gatekeeper of LDL uptake as well as a target of SREBP2. Intriguingly, we also detected robust enrichment of an uncharacterized gene, C12ORF49, to which we referred as partner of site-1 protease (POST1) owing to its functional association with S1P (see below). To confirm that POST1 is indeed a critical factor for cholesterol homeostasis, we generated two independent POST1 KO cell lines using the CRISPR/Cas9 technique, followed by the cholesterol depletion-repletion-AmB selection challenge in the presence or absence of U18666A. Compared with wild-type (WT) cells that survived only when U18666A was added to AmB, POST1 KO cells showed markedly improved resistance to AmB even in the absence of U18666A (Fig. 1C). Overall, these results support a positive role of POST1 in regulating the PM cholesterol level. Human POST1 is a small protein of 205 amino acids. It is predicted to contain a cytosolic segment (1-16 aa), a transmembrane segment (17-36 aa), and a stretch of 169 amino acids extending into the lumen (Fig. 1D). To determine the subcellular location of POST1, HeLa cells were transfected with the plasmids encoding enhanced green fluorescent protein (EGFP)-tagged POST1 and DsRed-tagged KDEL (Lys-Asp-Glu-Leu), an ER retention motif, and then immunostained with the Golgi marker GM130. We detected robust POST1 staining colocalized with GM130 and modest signal colocalized with DsRed-KDEL (Fig. 1E). POST1 contains a potential N-linked glycosylation site (Fig. 1D). The transfected POST1 protein was partially sensitive to endoglycosidase H (Endo H), but completely shifted to a lower position by peptide N-glycosidase F (PNGase F) (Fig. 1F). These results suggest that POST1 resides in both the ER and Golgi.
Figure 2. (A) Cells were cultured in the medium containing 10% fetal bovine serum (FBS), or the depletion medium (5% lipoprotein-deficient serum plus 10 µmol/L mevalonate and 1 µmol/L lovastatin), or the depletion medium supplemented with 25-hydroxycholesterol (25-HC) for 16 h. The mRNA expression levels were normalized to those of HeLa cells in the FBS condition. Colors indicate the gene expression range, with the lowest expression in blue and the highest expression in red. (B) Quantitative real-time PCR analysis of HeLa and HeLa/POST1 KO cells under different culture conditions. Data were normalized to HeLa cells in the FBS condition and presented as mean ± SD (n = 3 independent trials). (C) Immunoblot analysis of HeLa and HeLa/POST1 KO cells under different culture conditions.
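The hit-selection step described above reduces to a small filtering operation; a sketch on a MAGeCK-style gene summary table, where the file and column names are illustrative stand-ins rather than MAGeCK's actual output headers:

import pandas as pd

genes = pd.read_csv("gene_summary.txt", sep="\t")

# At least 2 gRNA hits per gene and a positive log2 fold change, ranked by LFC.
hits = genes[(genes["good_sgrnas"] >= 2) & (genes["lfc"] > 0)]
hits = hits.sort_values("lfc", ascending=False)

# The 115 highly enriched genes of Fig. 1B correspond to the stricter cutoff.
enriched = hits[(hits["lfc"] > 1) & (hits["pvalue"] < 0.001)]
print(enriched.head(20))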
ACC1, acetyl-CoA carboxylase 1; CYP51A1, cytochrome P450 family 51 subfamily A member 1; FASN, fatty acid synthase; FDFT1, farnesyl-diphosphate farnesyltransferase 1; HMGCR, 3-hydroxy-3-methylglutaryl-CoA reductase; HMGCS1, 3-hydroxy-3-methylglutaryl-CoA synthase 1; INSIG, insulin-induced gene; LDLR, low-density lipoprotein receptor; LSS, lanosterol synthase; SCAP, SREBP-cleavage activating protein; SCD1, stearoyl-CoA desaturase 1; SQLE, squalene epoxidase.
POST1 is engaged in the SREBP pathway To gain insights into the mechanism by which POST1 governs cholesterol homeostasis, we treated WT and POST1 KO cells with three different conditions, namely (1) normal culture medium containing fetal bovine serum (FBS), (2) cholesterol-depletion medium, or (3) cholesterol-depletion medium plus 25-hydroxycholesterol (25-HC), and then performed whole-transcriptome sequencing (RNA-seq). The transcriptome data of WT cells exposed to FBS were used as a reference. In both WT and POST1 KO cells, the expression of SREBP2 target genes (cholesterol metabolism) and SREBP1c target genes (fatty acid metabolism) was markedly elevated upon cholesterol starvation (Fig. 2A; Table S2). However, these increases were much more moderate when POST1 was ablated. 25-HC, a potent inhibitor of the SREBP pathway, abrogated the upregulation of lipid metabolism-related genes caused by cholesterol depletion. The expression profiles of genes involved in cholesterol and fatty acid metabolism were further verified using quantitative real-time PCR and immunoblotting (Fig. 2B and 2C). To investigate whether POST1 is directly involved in SREBP processing, we prepared a plasmid that encodes full-length SREBP2 with a Flag epitope tag at the N terminus (Flag-SREBP2). This allows tracing of SREBP2 under both cholesterol-rich and cholesterol-depleted conditions, the latter of which induces SREBP2 to liberate the N-terminal fragment (n-SREBP2) that translocates into the nucleus. WT and POST1 KO cells were transfected with Flag-SREBP2 and challenged with varying levels of cholesterol, and the subcellular localization of SREBP2 was examined using the anti-Flag antibody. WT cells cultured in 10% FBS showed an ER distribution of SREBP2 (Fig. 3A, top row; Fig. 3C and 3D), whereas those deprived of cholesterol had robust staining in the nucleus (Fig. 3A, middle row; Fig. 3C). 25-HC blocks the ER-to-Golgi transport of SREBPs (Radhakrishnan et al., 2007), and SREBP2 was mainly found in the ER of WT cells exposed to 25-HC (Fig. 3A, bottom row). With respect to POST1 KO cells, SREBP2 was predominantly localized in the ER in the presence of ample cholesterol (Fig. 3C and 3D). In line with these immunostaining results, n-SREBP2 was evident in WT cells depleted of cholesterol but barely detectable in POST1 KO cells (Fig. 3E). We generated three independent lines of POST1 KO cells and found markedly reduced n-SREBP2 compared with WT controls (Fig. 3F and S1B). Further, knockdown of POST1 using small interfering RNA (siRNA) hampered cholesterol depletion-induced SREBP2 cleavage to a similar extent as knockdown of SCAP (Fig. 3G and S1C). Together, these results establish POST1 as a key regulator of SREBP processing. POST1 promotes autocatalytic cleavage of S1P We next sought to investigate the molecular mechanism by which POST1 regulates the SREBP pathway.
HeLa cells stably expressing POST1-Flag were generated and the potential binding proteins of POST1 were identified using co-immunoprecipitation coupled to tandem mass spectrometry (MS/MS). HeLa cells stably expressing Flag alone were used as a negative control.
Figure 3. Ablation of POST1 decreases cholesterol-depletion-induced cleavage of SREBP2. (A and B) Confocal images showing the subcellular localization of transfected SREBP2 in HeLa (A) and HeLa/POST1 KO (B) cells under different culture conditions. Cells were transfected with pCMV-3×Flag-SREBP2 and pCMV-SCAP for 48 h, and incubated with the indicated medium for 16 h. Cells were immunostained with the antibodies against Flag (red) and GM130 (white). Nuclei were counterstained with DAPI (blue). Scale bar, 10 μm. (C and D) Percentages of SREBP2 intensity in the nucleus (C) and Golgi (D) normalized to the total SREBP2 intensity in (A) and (B). Data are presented as mean ± SD (10 cells/trial; 3 independent trials). One-way ANOVA with Tukey HSD post hoc test. *P < 0.05, ***P < 0.001; ns, no significance. (E) HeLa and HeLa/POST1 KO cells were depleted of cholesterol for 16 h and incubated with the indicated media for 16 h. Cells were treated with 25 μg/mL ALLN for 1 h prior to harvesting. Membrane fractions and nuclear extracts were prepared as described in Methods, and the endogenous SREBP2 precursor was analyzed by the 1D2 antibody. Pre, precursor; n, nuclear. (F) HeLa and three different lines of HeLa/POST1 KO cells were treated as described in (E), and whole cell lysates were subjected to immunoblotting. Pre, precursor; n, nuclear. CHC, clathrin heavy chain. (G) HeLa cells were transfected with the indicated siRNAs for 48 h and cultured in the FBS-containing or cholesterol-depletion medium for 16 h. Cells were treated with 25 μg/mL ALLN for 1 h prior to harvesting. Whole cell lysates were subjected to immunoblotting. Pre, precursor; n, nuclear.
Analysis of the MS/MS results from three independent experiments revealed a set of highly enriched proteins including S1P (encoded by MBTPS1) (Fig. 4A; Table S3). The POST1-S1P interaction was confirmed by the co-immunoprecipitation assay (Fig. 4B). S1P is synthesized as an inactive precursor whose domain organization is shown in Fig. 4C. To become mature, S1P undergoes multiple processing steps of the N-terminal prodomain involving the catalytic triad D218/H249/S414, generating three shortened forms designated S1P-A, S1P-B'/B and S1P-C'/C (Elagoz et al., 2002; da Palma et al., 2014). Indeed, immunoblotting analysis of HeLa cells transfected with a plasmid encoding S1P fused with a C-terminal Flag tag (S1P-Flag) showed three bands corresponding to differently cleaved forms of S1P (Fig. 4D). Co-expression of POST1 increased S1P-C'/C production and eliminated S1P-B'/B (Fig. 4D). We next evaluated the impact of POST1 on processing of transfected S1P in cells lacking the endogenous counterpart (Fig. S1D). As in the WT cells, POST1 promoted S1P-C'/C generation at the expense of S1P-B'/B (Fig. 4E, compare lanes 1 and 2). No S1P autoprocessing was detected when the enzymatically inactive mutant (EM, D218A/H249A/S414A) was expressed alone (lane 3) or together with POST1 (lane 4), suggesting that S1P enzymatic activity is a prerequisite for its self-cleavage regardless of the presence of POST1.
The B'/B cleavage site mutant (BM, R130E/R134E) of S1P failed to yield S1P-B'/B and S1P-C'/C (lane 5), whereas the C'/C mutant (CM, R163E/R164E/R183E/R184E) could generate S1P-B'/B but failed to yield S1P-C'/C (lane 7). These results are consistent with the earlier work (da Palma et al., 2014), and suggest that S1P cleavage at the B'/B sites is required for subsequent cleavage at the C'/C sites. However, we observed S1P-C'/C, albeit in small amounts, in S1P-KO cells co-transfected with the S1P B'/B mutant and POST1 (lane 6), suggesting that POST1 may aid S1P autoprocessing bypassing the B'/B sites. By contrast, POST1 had no effect on the autoprocessing of S1P with C'/C mutations (lane 8). It should be noted that the S1P used in the above experiments had a Flag tag at the C terminus, which provided limited information on the self-cleavage steps occurring within the N-terminal prodomain. Therefore, to examine whether POST1 facilitates generation of S1P-C'/C directly from S1P-A, we prepared plasmids that encode two HA-tagged versions of S1P-Flag, designated HA-1 and HA-2. The HA epitope tag was inserted in the prodomain between the A and B'/B sites (prodomain I) in HA-1, and between the B'/B and C'/C sites (prodomain II) in HA-2 (Fig. 4F). S1P-Flag, HA-1 or HA-2 was transfected into HeLa cells alone or in combination with POST1, and S1P autoprocessing was analyzed by immunoblotting. The anti-Flag blot (Fig. 4F, the 1st blot) showed that POST1 increased S1P-C'/C formation when co-expressed with S1P-Flag, HA-1 or HA-2. However, in the anti-HA blot, we detected S1P-A and a 14-kDa band corresponding in size to prodomain I in the cells transfected with the HA-1 plasmid, but the A and B'/B forms in the cells transfected with the HA-2 plasmid (lanes 3 and 5 of the 2nd and 3rd blots). These results are in accordance with the canonical cleavage event in which S1P-A is converted to S1P-B'/B and then to S1P-C'/C. Theoretically, the prodomain II generated from HA-2 should also be visible, and we attribute its absence to the small protein size (∼5 kDa). Upon POST1 co-expression, a band corresponding to the prodomain (I+II) appeared in the cells expressing HA-1 or HA-2 (lanes 4 and 6 of the 3rd blot). These results support the notion that POST1 promotes S1P-A cleavage at the C'/C sites. In addition, S1P-B'/B was dramatically decreased when POST1 was co-expressed (compare lanes 5 and 6 of the 2nd blot), indicating that POST1 also promotes S1P-B'/B cleavage at the C'/C sites. Together, we propose that POST1 accelerates generation of mature S1P-C'/C from S1P-B'/B via a canonical cleavage as well as from S1P-A directly via a non-canonical cleavage (Fig. 6I). We next investigated the effect of POST1 on S1P subcellular location using HeLa cells transfected with S1P-Flag alone or in combination with POST1-EGFP. In the absence of POST1, the majority of S1P resided in the ER and only about 12% of S1P was found colocalized with GM130 (Fig. 5A and 5B). Co-expression of POST1 caused a greater than 5-fold increase in S1P localization to the Golgi complex (Fig. 5A and 5B). Notably, mutations in the enzymatic activity (EM) and the B'/B or C'/C cleavage sites (BM and CM) severely impaired POST1-induced translocation of S1P from the ER to the Golgi (Fig. 5C and 5D). As these mutants (EM, BM and CM) produced little or no S1P-C'/C (Fig. 4E), it is concluded that POST1 facilitates the generation of S1P-C'/C that is subsequently transported to the Golgi.
Co-expression of S1P and POST1 dramatically promoted the nuclear localization of SREBP2 (Fig. S2A; Movie S1). To address where POST1-stimulated S1P processing occurs, we prepared plasmids encoding POST1 with a C-terminal EGFP followed by a KDEL or KDAS (Lys-Asp-Ala-Ser) tetrapeptide sequence. The KDEL tail is supposed to confer constitutive ER localization of POST1-EGFP, whereas the KDAS tail should be non-functional and serves as a negative control. To validate the localization of POST1-EGFP, POST1-EGFP-KDEL and POST1-EGFP-KDAS, lysates from cells transfected with the various plasmids were treated with Endo H or PNGase F. As expected, only POST1-EGFP-KDEL was completely sensitive to Endo H, and the other two proteins were partially resistant to Endo H (Fig. S2B). These results suggest that all POST1-EGFP-KDEL proteins reside in the ER, and that the other two proteins are present in both the ER and Golgi. We transfected the plasmids encoding S1P-Flag and the different versions of POST1-EGFP into HeLa cells, and found that both KDEL- and KDAS-tagged POST1-EGFP could facilitate S1P-C'/C production as the WT version did (Fig. 5E). Notably, unlike the Golgi localization of S1P in the cells expressing WT and KDAS-tagged POST1-EGFP, S1P was mainly retained in the ER when co-expressed with POST1-EGFP-KDEL (Fig. 5F and 5G). These results indicate that POST1 promotes self-cleavage of S1P in the ER and that the generated S1P-C'/C still binds to POST1. POST1 is critical for proteolytic activation of other S1P substrates SREBPs are among many membrane-bound transcription factors cleaved by S1P. We next sought to test whether POST1 can affect proteolysis of other cellular substrates of S1P.
Figure 5. (A) HeLa cells were transfected with pCMV-S1P-3×Flag (red) alone or in combination with pEGFP-N1-POST1 (green) for 48 h. Cells were fixed and immunostained with the antibodies against Flag and GM130 (white). Nuclei were counterstained with DAPI (blue). Scale bar, 10 μm. (B) Percentages of S1P intensity in the Golgi normalized to the total S1P intensity in (A). Data are presented as mean ± SD (10 cells/trial; 3 independent trials). Unpaired two-tailed Student's t-test. ***P < 0.001. (C) HeLa cells were co-transfected with pCMV-POST1-EGFP (green) and pCMV-S1P-3×Flag variants (red) for 48 h. Cells were fixed and immunostained with the antibodies against Flag and GM130 (white). BM, B'/B cleavage site mutations; CM, C'/C cleavage site mutations; EM, enzymatic site mutations. The S1P variants are illustrated in Fig. 4C. Scale bar, 10 μm. (D) Percentages of S1P intensity in the Golgi normalized to the total S1P intensity in (C). Data are presented as mean ± SD (10 cells/trial; 3 independent trials). One-way ANOVA with Tukey HSD post hoc test. ***P < 0.001. (E) HeLa cells were transfected as indicated for 48 h and subjected to immunoblotting. (F) HeLa cells were co-transfected with pCMV-S1P-3×Flag (red) and different variants of pCMV-POST1-EGFP (green) for 48 h. Cells were fixed and immunostained with the antibodies against Flag and GM130 (white). Scale bar, 10 μm. (G) Percentages of S1P intensity in the Golgi normalized to total S1P intensity in (F). Data are presented as mean ± SD (10 cells/trial; 3 independent trials). One-way ANOVA with Tukey HSD post hoc test. ***P < 0.001; ns, no significance.
Activating transcription factor 6 (ATF6) and cAMP response element-binding protein (CREB) 3 family
members including CREB3L3 (also called CREBH) are ER-resident proteins that respond to stimuli by trafficking to the Golgi, where they are sequentially cleaved by S1P and S2P (Ye et al., 2000; Zhang et al., 2006). ATF6 is a key player in the unfolded protein response, and CREBs regulate a wide array of genes involved in lipoprotein metabolism, collagen assembly, bone development and others. Figure 6A and 6B show the processing of endogenous ATF6 and transfected CREB3L3 in WT and POST1 KO cells exposed to thapsigargin for various periods. POST1 KO cells had lower amounts of the cleaved nuclear forms of ATF6 (n-ATF6) and CREB3L3 (n-CREB3L3) than WT cells, suggestive of impaired cleavage of ATF6 and CREB3L3 in the absence of POST1. S1P also regulates lysosomal biogenesis by cleaving the α/β-subunit precursor of N-acetylglucosamine (GlcNAc)-1-phosphotransferase, a key enzyme responsible for modifying newly synthesized lysosomal enzymes with mannose 6-phosphate (M6P) residues (Marschner et al., 2011). The α and β subunits are encoded by a single GNPTAB gene (Tiede et al., 2005). As shown in Fig. 6C, deficiency of POST1 largely inhibited the cleavage of the α/β-subunit precursor to release the α and β subunits. In line with this, the overall levels of M6P-modified proteins were reduced in cells lacking POST1 or S1P (Fig. 6D). The intracellular abundance of α-mannosidase, an M6P-modified lysosomal enzyme, was significantly decreased but its secretion into the medium was greatly increased (Fig. 6E). We next examined the volume of lysosomes using the LAMP1 antibody and Lysotracker. Lysosome enlargement is a sign of dysfunction (te Vruchte et al., 2014; Xu et al., 2014). POST1-deficient cells had enlarged lysosomes as revealed by the LAMP1 and Lysotracker staining (Fig. 6F). Defective lysosomal homeostasis in the absence of POST1 eventually caused massive accumulation of cholesterol and lyso-bis-phosphatidic acid (LBPA) in the cell (Fig. 6G and 6H), since lysosomal proteins such as NPC2 need M6P modification for lysosomal targeting (Wei et al., 2017). DISCUSSION The impetus of the present study was to identify uncharacterized factors that regulate cholesterol metabolism. For this purpose, we performed a genome-scale, unbiased CRISPR/Cas9 KO screen coupled to the "cholesterol depletion-repletion-AmB selection" challenge (Fig. 1A), so that genes involved in LDL uptake and cholesterol trafficking to the PM were highly enriched. Our screen uncovered C12ORF49/POST1, the loss of which increased AmB resistance (Fig. 1C), attenuated SREBP target gene expression (Fig. 2A and 2B) and blocked SREBP processing (Fig. 3). Further examination showed that POST1 modulated SREBP signaling by accelerating the generation (Fig. 4) and Golgi localization (Fig. 5) of mature S1P. In addition to SREBP activation, POST1-mediated S1P maturation is also critical for the cleavage of other S1P substrates including ATF6, CREB3L3 and the α/β-subunit precursor of GlcNAc-1-phosphotransferase (Fig. 6). These results establish POST1 as a key determinant of S1P maturation and lipid metabolism. Based on these findings, we attribute the survival of POST1-deficient cells in the AmB screen to two reasons: 1) low LDLR expression as a result of impaired cleavage of SREBP2 (Figs. 2C and 3E); and 2) defective lysosomal cholesterol transport as a result of impaired cleavage of the α/β-subunit precursor of GlcNAc-1-phosphotransferase (Fig. 6C and 6G).
All nine members of the mammalian proprotein convertase family are synthesized as zymogens and activated by autocatalytic cleavages of the N-terminal prodomain (Seidah and Prat, 2012). However, the self-processing of S1P is particularly complicated, involving four identified cleavage sites (B', B, C' and C) and multiple cleavage steps, first at the B'/B sites and then the C'/C sites, yielding various forms of S1P with prodomain segments of different lengths bound non-covalently (Elagoz et al., 2002; da Palma et al., 2014).
Figure 6. POST1 affects proteolysis of other S1P substrates. (A) HeLa and HeLa/POST1 KO cells were treated with 2 μmol/L thapsigargin (Tg) for the indicated periods, and the processing of endogenous ATF6 was analyzed by immunoblotting. Pre, precursor; n, nuclear. (B) HeLa and HeLa/POST1 KO cells were transfected with pCMV-5×Myc-CREB3L3 for 48 h and treated with 2 μmol/L thapsigargin for the indicated periods. The processing of transfected CREB3L3 was analyzed by immunoblotting. Pre, precursor; n, nuclear. (C) HeLa and HeLa/POST1 KO cells were transfected with the pCMV-GNPTAB miniconstruct for 48 h, and the processing of the transfected α/β-subunit precursor was analyzed by immunoblotting. (D) Immunoblot analysis of HeLa, HeLa/S1P KO and HeLa/POST1 KO cells using the anti-M6P antibody and GAPDH antibody. (E) Activity of α-mannosidase in the whole cell lysates and medium of HeLa and HeLa/POST1 KO cells. Values in HeLa cells were set to 1. Data are presented as mean ± SD (2 samples/trial; 3 independent trials). Unpaired two-tailed Student's t-test, *P < 0.05, ***P < 0.001. (F) HeLa cells were transfected with the indicated siRNAs for 48 h, and stained with Lysotracker (red) for 30 min. Cells were then fixed and immunostained with the antibodies against LAMP1 (green). Scale bar, 10 μm. (G) HeLa cells were transfected with the indicated siRNAs for 48 h, fixed, stained with filipin (blue) and immunostained with the antibody against lyso-bis-phosphatidic acid (LBPA, green). Scale bar, 10 μm. (H) Quantification of filipin, Lysotracker and LBPA intensity in HeLa and HeLa/POST1 KO cells shown in (F) and (G). Data are presented as mean ± SD (10 cells/trial; 3 independent trials). Unpaired two-tailed Student's t-test. ***P < 0.001. (I) Schematic representation of POST1-promoted S1P maturation. POST1 accelerates the canonical autocleavage of S1P-B'/B at the C'/C sites, and the non-canonical autocleavage of S1P-A directly at the C'/C sites.
Contrasting with other proprotein convertases, in which the prodomain functions as an inhibitor, the prodomain of S1P is crucial for its folding, autoprocessing and proteolysis of substrates including SREBP2 (da Palma et al., 2014; da Palma et al., 2016). Another unique aspect of S1P is that it need not be completely processed to become enzymatically active, as S1P-B'/B can already cleave the SREBP2 precursor and activate the downstream signaling (da Palma et al., 2014). However, this is unlikely to be the case in vivo, because S1P-B'/B resides in the ER and so does SREBP under cholesterol repletion conditions, in which the SREBP pathway is known to shut down. Instead, the B'/B forms may function both as the catalyst and the substrate to give rise to the fully mature enzyme that can reach the Golgi and selectively process the SREBP translocated there as a result of cholesterol depletion.
Here, we demonstrate that POST1 is an S1P cofactor that promotes generation of S1P-C'/C, either from S1P-B'/B (canonical cleavage) or directly from S1P-A (non-canonical cleavage). It has been reported that all the S1P precursors reside in the ER and that mature S1P-C'/C resides in the Golgi (DeBose-Boyd et al., 1999). We demonstrate that POST1-EGFP-KDEL facilitates the production of S1P-C'/C, and that the S1P mutants that cannot be converted to the active form are mainly present in the ER (Fig. 5C and 5G). These data suggest that POST1 facilitates S1P-C'/C production in the ER and that the two proteins are then transported to the Golgi together. The mechanism by which POST1 contributes to S1P-C'/C production is unknown. However, this process is abolished by the D218A/H249A/S414A mutations (Fig. 4E), suggesting an absolute dependency on S1P enzymatic activity. Since autoprocessing at the C'/C sites is reported to occur in trans, involving another S1P protein (da Palma et al., 2014), we speculate that POST1 may facilitate this intermolecular reaction by bringing two immature S1Ps to an optimal distance or orientation, so that one, as the substrate, can access the catalytic triad of the other. It will also be interesting to examine whether POST1 affects autoprocessing of proprotein convertase subtilisin kexin 9 (PCSK9), which is classified as a non-basic proprotein convertase along with S1P and, importantly, serves as an emerging drug target for hyperlipidemia and cardiovascular disease (Burke et al., 2017). The physiological regulator of POST1 is worth investigating as well. During the preparation of our manuscript, C12ORF49 was identified as a key determinant of the SREBP pathway (Aregger et al., 2020; Bayraktar et al., 2020; Loregger et al., 2020). These studies all found that absence of C12ORF49 reduced expression of SREBP target genes or impaired SREBP cleavage, which is consistent with our results. All their results can be explained by our finding that POST1 is required for S1P maturation. For example, Loregger et al. found that knockout of C12ORF49 decreased the SCAP protein level and caused SCAP relocation to the Golgi regardless of sterol levels. We believe that these phenotypes are attributable to impairments in C12ORF49-mediated S1P maturation: defective SREBP cleavage by S1P then prevents Golgi-to-ER transport of SCAP and causes SCAP degradation in lysosomes (Shao and Espenshade, 2014). Consistently, depletion of POST1 or S1P similarly reduced the SCAP level (Fig. S3A and S3B). As cholesterol depletion promotes ER-to-Golgi transport of SCAP, and SCAP is degraded when SREBP cannot be efficiently cut by S1P, less SCAP was detected in POST1 KO cells under the sterol-depletion condition (Fig. S3A). If the reduced SCAP were the cause of impaired SREBP cleavage, the SREBP cleavage should be rescued by brefeldin A (BFA), an ER-Golgi protein trafficking inhibitor that disassembles and redistributes the Golgi complex into the ER (Sciaky et al., 1997). BFA largely restored the SREBP2 cleavage in the SCAP-knockdown cells, but had no effect on the POST1-knockdown cells (Fig. S3C). Thus, the reduced SCAP in POST1 KO cells is not the direct cause of impaired SREBP cleavage. In summary, our study shows that POST1 is a cofactor of S1P. It promotes autocatalytic cleavage of S1P at the C'/C sites from immature S1P-A and S1P-B'/B. Through modulating S1P maturation, POST1 is critically involved in the processing of SREBP, ATF6, CREB3 family members and other S1P substrates.
Primary antibodies The following antibodies were used in this study: mouse anti-ATF6 Hamburg-Eppendorf, Germany). Primary antibodies were used at a dilution of 1:500 for immunofluorescent staining and 1:1,000 for immunoblotting. HeLa and HeLa/POST1 KO cells were maintained in Medium A. HeLa/S1P KO cells were maintained in Medium A supplemented with 5 mg/mL cholesterol, 1 mmol/L mevalonate and 20 mmol/L oleic acid. The depletion medium was DMEM supplemented with 5% LPDS, 1 μmol/L lovastatin and 10 μmol/L mevalonate. 1 μg/mL 25-HC or 50 μg/mL LDL was added to the depletion medium if required. Cells were grown in a monolayer at 37 °C with 5% CO2. Genome-wide CRISPR/Cas9 screen coupled to AmB selection HeLa/Cas9-Flag cells were transduced with lentivirus expressing a pooled GeCKO v2 library containing 65,383 sgRNAs targeting 19,050 human genes (3 sgRNAs per coding gene and 4 sgRNAs per microRNA) at a multiplicity of infection of 0.3. Cells were incubated with Medium A containing 4 μg/mL puromycin for 4 days and then Medium A for another 3 days. A subpopulation of cells was collected to evaluate sgRNA target diversity. A greater than 300× library coverage was achieved. Transduced HeLa/Cas9-Flag cells were treated with the depletion medium for 16 h and then incubated with the depletion medium plus 50 μg/mL LDL and, if necessary, 2 μg/mL U18666A for 4 h. Cells were then incubated in the depletion medium supplemented with 50 μg/mL LDL and 300 µg/mL amphotericin B for 1 h. Cells were washed with PBS and incubated with Medium A for 4 days. A total of 5 rounds of "cholesterol depletion-repletion-AmB selection" was performed. The sgRNA inserts from surviving cells and those from transduced cells prior to the first round of selection were amplified and subjected to deep sequencing. Immunofluorescence Cells grown on coverslips were fixed with 4% paraformaldehyde for 30 min and treated with 0.2% Triton X-100 in PBS for 5 min. Cells were washed with PBS and incubated with primary antibodies for 1 h at room temperature. After washing with PBS, cells were incubated with 3% bovine serum albumin (BSA) in PBS and appropriate secondary antibodies at a dilution of 1:1,000 for 1 h at room temperature. Cells were finally counterstained with 300 nmol/L DAPI in PBS for 5 min. Confocal images were acquired with a Leica Biosystems SP8 laser scanning microscope. The contours of the cell, Golgi and nuclei were outlined manually, and background-subtracted fluorescent intensity was quantified using ImageJ. Lysotracker and filipin staining Cells were transfected with the indicated siRNA for 48 h and incubated with Medium A supplemented with 100 nmol/L Lysotracker for 30 min. Cells were then fixed and stained with 50 μg/mL filipin (prepared as a 5 mg/mL stock solution in ethanol) in PBS for 1 h at room temperature. Immunoblotting analysis Cells at a confluency of 80%-90% were harvested and homogenized in 120 μL of RIPA buffer supplemented with protease inhibitors. After centrifuging at 13,400 ×g for 10 min, supernatants were collected and the protein concentration was determined using the BCA kit (ThermoFisher Scientific). If needed, 10 units/μL endoglycosidase H or 5 units/μL peptide N-glycosidase F was incubated with the supernatants at 37 °C for 1 h. Supernatants were mixed with 4× loading buffer and boiled for 10 min. Proteins were resolved by SDS-PAGE and transferred to a PVDF membrane. Blots were blocked with 5% BSA in TBS plus 0.075% Tween (TBST) and probed with primary antibodies overnight at 4 °C.
After TBST washes, blots were incubated with secondary antibodies for 1 h at room temperature. Where indicated, cells were treated with N-acetyl-leucyl-leucyl-norleucinal (ALLN) at a final concentration of 25 μg/mL for 1 h at 37 °C prior to harvesting. Quantitative real-time PCR Total RNA was extracted from HeLa cells transfected with the indicated siRNAs (listed in Supplementary Material, Table S5) or from the indicated knockout cells using TRIzol (Invitrogen No. 15596018). Equal amounts of RNA were used for cDNA synthesis followed by quantitative real-time PCR as previously described (37). The relative mRNA levels were calculated using the comparative CT method. Human GAPDH was used as the control. All qPCR primers are listed in Supplementary Material, Table S6. Gene expression in cells transfected with scramble siRNAs was defined as 1. Mass spectrometry HeLa cells or HeLa/POST1-3×Flag cells were homogenized in NP40 buffer (0.5% NP40 in PBS containing 5 mmol/L EDTA). After centrifuging at 2,000 ×g for 10 min, supernatants were collected and pre-cleared with protein A/G beads at 4 °C for 1 h. Mixtures were centrifuged at 1,000 ×g for 10 min, and supernatants were incubated with anti-Flag beads at 4 °C for 4 h. Beads were spun down and washed with NP40 buffer 5 times, and proteins coupled to the beads were eluted with 0.1 mg/mL 3×Flag peptides. Eluents were collected and analyzed by liquid chromatography-tandem mass spectrometry. The intensity of a protein present in HeLa/POST1-3×Flag cells divided by that in HeLa cells was defined as the fold change. Proteins with a fold change > 1.5 were collected, and those detected in all three independent experiments were identified as POST1-interacting proteins. Statistical analysis Data are expressed as means ± SD and were analyzed with GraphPad Prism 6 software. Sample sizes, biological replicates, statistical tests and P values are indicated in the figure legends. Statistical significance was set at P < 0.05.
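The interactor-selection rule stated above translates directly into a small filtering step; a sketch assuming one fold-change column per experiment (the input format and column names are hypothetical):

import pandas as pd

# Select POST1 interactors: fold change (POST1-3xFlag / control) > 1.5, with the
# protein detected in all three independent experiments.
fc = pd.read_csv("coip_fold_changes.csv", index_col="protein")  # columns: exp1..exp3
reps = ["exp1", "exp2", "exp3"]

detected_in_all = fc[reps].notna().all(axis=1)
enriched_in_all = (fc[reps] > 1.5).all(axis=1)
interactors = fc[detected_in_all & enriched_in_all]
print(interactors.sort_values("exp1", ascending=False).head())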
Patterns of Flavour Violation in Models with Vector-Like Quarks We study the patterns of flavour violation in renormalisable extensions of the Standard Model (SM) that contain vector-like quarks (VLQs) in a single complex representation of either the SM gauge group $G_{SM}$ or $G_{SM}' = G_{SM} \times U(1)_{L_\mu - L_\tau}$. We first decouple VLQs in the $M=(1-10)$ TeV range and then at the electroweak scale also $Z, Z'$ gauge bosons and additional scalars to study the resulting phenomenology that depends on the relative size of $Z$- and $Z'$-induced flavour-changing neutral currents, as well as the size of $\Delta F=2$ contributions including the effects of renormalisation group Yukawa evolution from $M$ to the electroweak scale that turn out to be very important for models with right-handed currents through the generation of left-right operators. In addition to rare decays like $P\to \ell\bar\ell$, $P\to P' \ell\bar\ell$, $P\to P'\nu\bar\nu$ with $P=K, B_s, B_d$ and $\Delta F=2$ observables we analyze the ratio $\varepsilon'/\varepsilon$ which appears in the SM to be significantly below the data. We study patterns and correlations between these observables which taken together should in the future allow for differentiating between VLQ models. In particular the patterns in models with left-handed and right-handed currents are markedly different from each other. Among the highlights are large $Z$-mediated new physics effects in Kaon observables in some of the models and significant effects in $B_{s,d}$-observables. $\varepsilon'/\varepsilon$ can easily be made consistent with the data, implying then uniquely the suppression of $K_L\to \pi^0 \nu\bar\nu$. Significant enhancements of $Br(K^+\to \pi^+ \nu\bar\nu)$ are still possible. We point out that the combination of $\Delta F=2$ and $\Delta F=1$ observables in a given meson system generally allows to determine the masses of VLQs independently of the size of VLQ couplings. Introduction Among the simplest renormalisable extensions of the Standard Model (SM) that do not introduce any additional fine tunings of parameters are models in which the only new particles are vector-like fermions. Such fermions can be much heavier than the SM ones as they can acquire masses in the absence of electroweak symmetry breaking. If in the process of this breaking mixing with the SM fermions occurs, the generation of flavour-changing neutral currents (FCNC) mediated by the SM Z boson is a generic implication. If in addition the gauge group is extended by a second U(1) factor, a new heavy gauge boson $Z'$ is present and additional heavy scalars are necessary to provide mass for the $Z'$ and to break the extended gauge-symmetry group down to the SM gauge group. There is a rich literature on FCNCs implied by the presence of vector-like quarks (VLQs), see in particular [1][2][3][4][5][6][7][8][9][10][11][12]. The goal of the present paper is an extensive study of patterns of flavour violation in models with VLQs that are based on the gauge groups $G_{SM}$ and $G_{SM}' = G_{SM} \times U(1)_{L_\mu - L_\tau}$. The choice of the particular symmetry group $U(1)_{L_\mu-L_\tau}$ [13,14] is phenomenologically motivated by the fact that it allows in a simple manner to address successfully the LHCb anomalies [9,15], while being anomaly-free and containing fewer parameters than general $Z'$ models [16]. In our paper we will be guided by the analyses in Refs. [3,11,17] which identified all renormalisable models with additional fermions residing in a single vector-like complex representation of the SM gauge group with a mass M.
It turns out that there are 11 models in which the new fermions have the proper quantum numbers to couple in a renormalisable manner to the SM Higgs and the SM fermions, thereby implying new sources of flavour violation. Our analysis concentrates on FCNCs in the $K$, $B_d$ and $B_s$ systems, so that only the five models with couplings to down quarks are relevant for us, as specified in Section 2. We call this class of models $G_{SM}$-models. Correspondingly, the models based on the gauge group $G_{SM}'$ are called $G_{SM}'$-models. The VLQs in these models belong to the same representations under $G_{SM}$ as in the $G_{SM}$-models, but are additionally charged under $U(1)_{L_\mu-L_\tau}$. These models also contain new heavy scalars. As we will discuss in detail in Section 2 and Section 5, the patterns of flavour violation in $G_{SM}$-models and $G_{SM}'$-models differ significantly from each other:

• In $G_{SM}$-models, Yukawa interactions of the SM scalar doublet $H$ involving ordinary quarks and VLQs imply flavour-violating $Z$ couplings to ordinary quarks, which then dominate $|\Delta F|=1$ FCNC transitions. The situation in $|\Delta F|=2$ transitions is much more involved, however, and depends on whether right-handed (RH) or left-handed (LH) flavour-violating quark couplings to the $Z$ are present. If they are RH, the effects of renormalisation-group (RG) evolution from $M$ (the common VLQ mass) down to the electroweak scale $\mu_{EW}$ generate left-right operators [18] via top-Yukawa induced mixing. These operators are strongly enhanced through QCD RG effects below the electroweak scale and, in the case of the $K$ system, through chirally enhanced hadronic matrix elements. They then dominate the new-physics (NP) contributions to $\varepsilon_K$, but in the $B_{s,d}$ meson systems, for VLQ masses above 5 TeV, they have to compete with contributions from box diagrams with VLQs [11]. If they are LH, the Yukawa enhancement is less important, because left-right operators are not present and box diagrams play an important role in both the $B_{s,d}$ and $K$ systems.

• In $G_{SM}'$-models the pattern of flavour violation depends on the scalar sector involved. We consider only models in which at least one of the additional scalars is charged under $U(1)_{L_\mu-L_\tau}$ in such a way that Yukawa couplings between the given VLQ and ordinary quarks are allowed. If this is the case for a new scalar which is just a singlet $S$ under the SM group, these couplings imply flavour-violating $Z'$ couplings to ordinary quarks without any FCNCs mediated by the $Z$. In the following we refer to these models as $G_{SM}'(S)$-models. If, on the other hand, such a Yukawa coupling requires the scalar to be a doublet $\Phi$, both tree-level $Z$ and $Z'$ contributions to flavour observables are present. Their relative size depends on the model parameters, specifically the $Z'$ mass. In these cases we again introduce an additional scalar singlet, but without Yukawa couplings, since otherwise the $Z'$ mass would have to be of the order of the electroweak scale, which is phenomenologically very difficult to achieve. In the following we refer to these models as $G_{SM}'(\Phi)$-models.

In this manner we consider three classes of VLQ models with rather different patterns of flavour violation, $G_{SM}$, $G_{SM}'(S)$ and $G_{SM}'(\Phi)$, in which $|\Delta F|=1$ FCNCs are mediated by the $Z$, the $Z'$ and both, respectively. In $G_{SM}'(\Phi)$ models $|\Delta F|=2$ transitions are dominated for $M\geq5$ TeV by box diagrams with VLQ and scalar exchanges, while in the $G_{SM}'(S)$ models also tree-level $Z'$ exchanges can play an important, sometimes dominant, role.
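Schematically, the classification just described can be restated compactly (a summary of the text above, not an additional result):

$$\begin{aligned}
G_{SM}\text{-models}&:\quad |\Delta F|=1 \text{ FCNCs mediated by } Z,\\
G'_{SM}(S)\text{-models}&:\quad |\Delta F|=1 \text{ FCNCs mediated by } Z',\\
G'_{SM}(\Phi)\text{-models}&:\quad |\Delta F|=1 \text{ FCNCs mediated by both } Z \text{ and } Z'.
\end{aligned}$$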
A particular feature of the $G_{SM}$ models are the top-Yukawa induced RG effects in $|\Delta F|=2$ transitions, which are largest for RH scenarios and are absent in $G_{SM}'$ models. In [11] an extensive analysis of the $G_{SM}$-models has been performed, and a subset of the $G_{SM}'$-models has been analysed in [9,15]. It is therefore mandatory for us to state what is new in our article regarding these models:

• The authors of [11] concentrated on the derivation of bounds on the Yukawa couplings as functions of $M$, but did not study the correlations between various flavour observables, which are the prime target of our paper. Similar comments apply to [9].

• NP contributions to flavour observables depend in each model on the products of complex Yukawa couplings $\lambda_s^*\lambda_d$, $\lambda_b^*\lambda_d$ and $\lambda_b^*\lambda_s$ for $s\to d$, $b\to d$ and $b\to s$ transitions, respectively, as well as on the VLQ mass $M$. This structure allows one to set one of the $\lambda_q$ phases to zero, so that each model depends on only five Yukawa parameters and $M$, implying a number of correlations between flavour observables. The strongest correlations are, however, still found between observables corresponding to the same flavour-changing transition, and we concentrate our analysis on them. The correlations between observables of different transitions are weaker, but could turn out to be useful in the future when data and theory improve, in particular in the context of models for Yukawa couplings.

• An important novelty of our paper, relative to [9,11,15], is the inclusion of the ratio $\varepsilon'/\varepsilon$ in our study. Recent analyses indicate that the measured value of $\varepsilon'/\varepsilon$ is significantly above its SM prediction [19-22]; it is hence of interest to see which of the models analysed by us, if any, are capable of addressing this tension, and what the consequences for other observables are.

• Another important novelty, in the context of VLQ models and $|\Delta F|=2$ transitions in general, is the inclusion of the effects of RG top-Yukawa evolution from $M$ to the electroweak scale, which turn out to be very important for models with RH currents through the generation of left-right operators contributing to these transitions, as mentioned above. This changes markedly the pattern of flavour violation in such models relative to models with LH currents, where no left-right operators are generated.

Our paper is organised as follows. In Section 2 we present the particle content of the considered VLQ models, together with the gauge interactions, the Yukawa interactions and the scalar sector. In Section 3 we perform the decoupling of the VLQs and construct the effective field theory ($G^{(\prime)}_{SM}$-EFT) for each model for scales $\mu_{EW} < \mu < M$. Section 4 is devoted to the matching of these EFTs onto phenomenological ones describing $|\Delta F|=1,2$ processes below the scale $\mu_{EW}$. This results in explicit flavour-violating couplings of the $Z$ and $Z'$ to the SM quarks. These enter the effective Lagrangians for the various flavour-changing processes, from which we derive explicit formulae for the considered observables. In Section 5 we describe the patterns of flavour violation expected in the different models, summarising them with the help of two DNA tables. In Section 6, after formulating our strategy for the phenomenology, we present the numerical results of our study. We conclude in Section 7. Several appendices collect additional information on the models, the decoupling of VLQs, the RG equations in the $G_{SM}$-EFT, the considered decays, some technical details, and the input and statistical procedure used in the numerical analysis.
The VLQ Models

Throughout the article we focus on models with vector-like fermions residing in complex representations of either the SM gauge group $G_{SM}$ or its extension by an additional gauged $(L_\mu-L_\tau)$ symmetry, $G_{SM}'=G_{SM}\times U(1)_{L_\mu-L_\tau}$. In both cases we adopt the usual SM fermion content of three generations ($i=1,2,3$) of quarks ($q_L^i=(u_L^i,d_L^i)^T$, $u_R^i$, $d_R^i$) and leptons ($L_L^i=(\nu^i,\ell_L^i)^T$, $\ell_R^i$), which acquire masses via spontaneous symmetry breaking from the standard scalar $SU(2)_L$ doublet $H$. The gauged $(L_\mu-L_\tau)$ symmetry is anomaly-free in the SM [13,14]. The only non-vanishing $(L_\mu-L_\tau)$ charges of the SM fermions are $+Q$ for the second and $-Q$ for the third lepton generation. Here $L_L^2=(\nu_\mu,\mu_L)$ and $L_L^3=(\nu_\tau,\tau_L)$ are left-handed $SU(2)_L$ doublets, and $\mu_R$ and $\tau_R$ are right-handed singlets. We normalise the $(L_\mu-L_\tau)$ charges of the leptons, without loss of generality, by setting $Q=1$. The SM quarks do not couple directly to the $U(1)_{L_\mu-L_\tau}$ gauge boson $Z'$. However, such couplings are generated in $G_{SM}'$ models through Yukawa interactions of the SM quarks with VLQs that couple directly to the $Z'$.

VLQ Representations

As we are mainly interested in the phenomenology of down-quark physics, we restrict our analysis to $SU(3)_c$ triplets and consider the following five models with $SU(2)_L$ singlets, doublets and triplets,

$$D(1,-1/3,X),\quad Q_V(2,+1/6,X),\quad Q_d(2,-5/6,X),\quad T_d(3,-1/3,X),\quad T_u(3,+2/3,X),$$

where the transformation properties are indicated as $(SU(2)_L,\,U(1)_Y,\,U(1)_{L_\mu-L_\tau})$, i.e. $X$ denotes the charge under $U(1)_{L_\mu-L_\tau}$. It is understood that in $G_{SM}$-models the $U(1)_{L_\mu-L_\tau}$ charge is to be omitted. The representations $D$, $Q_V$, $Q_d$, $T_d$, $T_u$ correspond to the models V, IX, XI, VII, VIII introduced in Ref. [11], where a complete list of renormalisable models with vector-like fermions under $G_{SM}$ can be found, see also [3,17]. Concerning $G_{SM}'$, the combination of the representations $D$, $Q_V$ and additionally $U(1,+2/3,-X)$ has been studied first in [9]. The kinetic and gauge interactions of the new VLQs are given by

$$\mathcal{L}_{\rm kin} = \sum_{F=D,\,Q_V,\,Q_d} \bar F\,(i\slashed{D}-M_F)\,F \;+\; \sum_{T_a=T_d,\,T_u} {\rm Tr}\,\bar T_a\,(i\slashed{D}-M_{T_a})\,T_a, \qquad (6)$$

with appropriate covariant derivatives $D_\mu$; for the triplet representations we follow [11], as given in (2.13) and (2.14) of that paper. The masses $M$ of the VLQs introduce a new scale, which we will assume to be significantly larger than all other scales. The covariant derivative is, omitting the $SU(3)_c$ part,

$$D_\mu = \partial_\mu - i g_2\,\frac{\sigma^a}{2}\,W^a_\mu - i g_1\,Y\,B_\mu - i g'\,Q\,\hat Z'_\mu,$$

with the gauge couplings $g_{2,1}$ and $g'$ of $SU(2)_L$, $U(1)_Y$ and $U(1)_{L_\mu-L_\tau}$, respectively, and the charges $Y$ and $Q$ of $U(1)_Y$ and $U(1)_{L_\mu-L_\tau}$. The Pauli matrices are denoted by $\sigma^a$. The "hat" on $\hat Z'_\mu$ indicates that we deal here with the gauge eigenstate and not the mass eigenstate, see (100).

$G_{SM}$

The scalar sector consists of the SM scalar doublet $H$ with its usual scalar potential. The VLQs interact with the SM quarks ($q_L$, $u_R$, $d_R$) via the Yukawa interactions (8), where $\tilde H\equiv i\sigma_2 H^*$. The complex-valued Yukawa couplings $\lambda_i^{\rm VLQ}$ give rise to mixing with the SM quarks and to flavour-changing $Z$ couplings, which have been worked out in detail in [3,11] and are discussed in Section 3.1.

$G_{SM}'(S)$

In models with an additional $U(1)_{L_\mu-L_\tau}$, the scalar sector has to be extended in order to generate the mass of the corresponding gauge boson $Z'$. In the minimal version a complex scalar $S(1,0,X)$ ($SU(3)_c$ singlet) is added. As the VLQs are charged under $U(1)_{L_\mu-L_\tau}$, their Yukawa couplings with the SM doublet $H$ are forbidden, but the ones involving $S$ are allowed for $Q_S=\pm Q_{\rm VLQ}$ and are given in (9), following [9]. In fact this scalar system is sufficient for models with VLQs carrying the $U(1)_Y$ charges $Y=-1/3$ and $+1/6$ of the SM fermions $d_R$ and $q_L$, respectively.
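A minimal sketch of the structure of the $S$-Yukawa terms just referred to (eq. (9) itself is not reproduced in this excerpt; the chirality assignments follow those quoted later in the text, and whether $S$ or $S^\dagger$ appears is fixed by the sign choice $Q_S=\pm Q_{\rm VLQ}$):

$$\mathcal{L}_Y^{(S)} \;\sim\; \lambda_i^{D}\, S^{(\dagger)}\, \bar D_L\, d_R^i \;+\; \lambda_i^{Q_V}\, S^{(\dagger)}\, \bar Q_{VR}\, q_L^i \;+\; \text{h.c.},$$

so that, once $S$ acquires its VEV, mass mixing between the VLQs and the SM quarks is generated.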
In the following we refer to these models as $G_{SM}'(S)$-models. The special feature of these models is that, because of the absence of tree-level $Z$ contributions, tree-level $Z'$ exchanges dominate $\Delta F=1$ transitions and, in some part of the parameter space, can also compete with the contributions from box diagrams with VLQs and scalars in the case of $\Delta F=2$ transitions.

$G_{SM}'(\Phi)$

For VLQs whose $G_{SM}$ quantum numbers differ from those of all SM quark fields, the simple extension by a scalar singlet is not possible. In a next-to-minimal version we therefore add to the scalar sector an additional scalar $SU(2)_L$ doublet $\Phi(2,+1/2,X)$, besides the SM-like $H(2,+1/2,0)$. We require $|X|\neq1,2$ in order to avoid lepton-flavour violating (LFV) Yukawa couplings — see for example [23] — and in consequence there are no LFV $Z'$ couplings, which are subject to strong constraints at low energies. The vacuum expectation value (VEV) of $\Phi$ gives an unavoidable contribution of the order of the electroweak scale to the $Z'$ mass, contributes to the mass of the $Z$ and generates potentially large $Z-Z'$ mass-mixing effects. The latter would be strongly constrained by electroweak precision tests [24]; in particular, there would be sizeable corrections to the $Z$ couplings to muons. In order to avoid these difficulties, $\Phi$ is accompanied by an additional complex scalar singlet $S(1,0,Y)$, which breaks the $U(1)_{L_\mu-L_\tau}$ symmetry at the TeV scale. The $L_\mu-L_\tau$ charge of $S$ is chosen to be $Y=X/2$ in order to avoid the appearance of a Goldstone boson in the scalar sector and to forbid Yukawa couplings of $S$ with the SM fermions and the VLQs. The Yukawa interactions of the VLQs with $\Phi$, with $\tilde\Phi\equiv i\sigma_2\Phi^*$, are given in (10), and we refer to these models as $G_{SM}'(\Phi)$-models. We note that the structure of the couplings equals the one of the $G_{SM}$ models given in Eq. (8). For the VLQ $D(1,-1/3,X)$ we thus consider two versions, one in the $G_{SM}'(S)$- and one in the $G_{SM}'(\Phi)$-model; we refrain from the same procedure for $Q_V(2,+1/6,X)$. In $G_{SM}'(\Phi)$ models FCNCs are mediated by both $Z$ and $Z'$, but in the case of $\Delta F=2$ transitions box diagrams with VLQs and scalars play the dominant role for sufficiently large $M$. For ease of notation, we will sometimes refrain below from explicitly labelling the $\lambda_i$ by the VLQ representation, as should be done if several of them are considered simultaneously.

Yukawa couplings of several representations

In our numerics we will consider one VLQ representation at a time, as this simplifies the analysis significantly; in particular, the number of parameters is then quite limited. Still, it is useful to make a few comments on the structure of the flavour-violating interactions and to state at various places in our paper how our formulae would be modified by the presence of several VLQ representations in a given model. We plan to return to the phenomenology of such models in the future. When admitting several VLQ representations $F_m$ and $F_n$ simultaneously, potentially additional locally gauge-invariant Yukawa couplings $\sim\lambda_{mn}\,\bar F_{mL}\,\varphi_{mn}\,F_{nR}$ with $\varphi_{mn}=H$ have to be included in the case of $G_{SM}$-models [3]. They give rise to flavour-changing neutral Higgs currents at tree level. In the $G_{SM}'$-models the $U(1)_{L_\mu-L_\tau}$ charges of the additional scalars $\varphi_{mn}=S,\Phi$ have been chosen following the criteria explained above, which fixes in turn the $U(1)_{L_\mu-L_\tau}$ charges of the VLQs. In consequence, such couplings to $\varphi_{mn}=S,\Phi$ are not permitted; however, they are still allowed for $\varphi_{mn}=H$, which has zero $U(1)_{L_\mu-L_\tau}$ charge.
In $G_{SM}'(S)$ models, only the particular choice of the $U(1)_{L_\mu-L_\tau}$ charges $Q_{Q_V}=-Q_D$ [9] forbids these couplings to $H$, whereas the choice $Q_{Q_V}=Q_D$ would allow them, due to the possibility of replacing $\bar Q_{VR}\,q_L^i\to\bar q_L^i\,Q_{VR}$ in Eq. (9), which maintains gauge invariance since $S$ is a singlet. On the other hand, in $G_{SM}'(\Phi)$ models such couplings arise for $Q_d$ together with $D$ and $T_d$. Another important consequence of the presence of several representations is the generation of left-right $|\Delta F|=2$ operators in models with both LH and RH currents via the box diagrams discussed in Section 3.2, which is the case when singlets or triplets are present together with doublets. In the case of a single representation such operators can also be generated in models with doublets through the top-Yukawa RG evolution from $M$ to the electroweak scale, see Section 3.3.

Scalar sectors

In the $G_{SM}$-models, the scalar sector contains only the standard doublet $H(2,+1/2,0)$, which provides masses to the gauge bosons and the standard fermions in the course of the spontaneous breaking of $SU(2)_L\times U(1)_Y$. In $G_{SM}'(S)$-models the doublet $H(2,+1/2,0)$ fulfils again the same role, whereas the singlet $S(1,0,X)$ provides, via its VEV $\langle S\rangle=v_S/\sqrt{2}$, the mass $M_{Z'}=g'|X|v_S$ for the additional $U(1)_{L_\mu-L_\tau}$ gauge boson $Z'$. In $G_{SM}'(\Phi)$-models the doublet $\Phi_2\equiv H(2,+1/2,0)$ gives masses to the chiral fermions, whereas $\Phi_1\equiv\Phi(2,+1/2,X)$ contributes to the masses of the $Z$ and $Z'$ gauge bosons in combination with $S(1,0,X/2)$. The neutral components of the doublets acquire VEVs $v_1=v\cos\beta$ and $v_2=v\sin\beta$ with $0\le\beta\le\pi/2$, cf. (13). In this case neutral gauge-boson mixing occurs, with details given in Appendix A.2. Further details on the scalar sectors of the $G_{SM}'(S)$ and $G_{SM}'(\Phi)$ models are collected in Appendix A.1 and A.2, respectively. In Table 1 we summarise all models, listing for each VLQ representation the scalar singlets and doublets involved; in the last two columns we indicate which diagrams dominate the NP contributions to $|\Delta F|=1$ and $|\Delta F|=2$ transitions for $M\ge5$ TeV.

Decoupling of VLQs

The VLQ models are characterised by the masses $M$ of the VLQs, the various Yukawa couplings $\lambda_i^{\rm VLQ}$ ($i=1,2,3$) of Section 2.2 and the VEVs of the respective scalar sectors, see Section 2.3. The present lower bound on $M$ from the LHC is in the ballpark of 1 TeV, while the lower bounds on $M_{Z'}$ are typically close to 3 TeV if the $Z'$ has a direct coupling to light quarks. But, as emphasised in [9,15,25], the $Z'$ of $U(1)_{L_\mu-L_\tau}$ does not have such couplings, implying a much weaker lower bound on its mass, which could in fact be as low as the electroweak scale or even lower. While the $Z'$ could also be as heavy as the VLQs, we will assume the hierarchy $M_{Z'}\ll M$, cf. (14), in order to simplify the analysis. It is then natural to decouple first the VLQs and to consider EFTs for $G_{SM}$ and $G_{SM}'$ valid between the scales $\mu_M\sim M$ and $\mu_{EW}\sim v,\,v_S$. These are subsequently matched in one step onto $SU(3)_c\otimes U(1)_{em}$-invariant phenomenological EFTs of $|\Delta F|=1,2$ decays, which are valid between $\mu_{EW}$ and $\mu_b\sim m_b$, where $m_b$ denotes the bottom-quark mass. The coefficients determined in this process indicate which operators are the most important. In principle one could consider an intermediate EFT, constructed by integrating out the $Z'$ and the new scalars before integrating out the top quark, $W$ and $Z$; from the point of view of renormalisation-group effects, however, integrating out all these heavy fields simultaneously appears to be an adequate approximation.
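The resulting tower of effective theories can be summarised schematically as follows (a restatement of the setup just described):

$$\text{VLQ model}\;\xrightarrow{\;\mu_M\sim M\;}\;G^{(\prime)}_{SM}\text{-EFT}\;\xrightarrow{\;\mu_{EW}\sim v,\,v_S\;}\;SU(3)_c\otimes U(1)_{em}\text{-invariant }|\Delta F|=1,2\text{ EFTs}\;\xrightarrow{\;\mu_b\sim m_b\;}\;\text{observables}.$$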
In this section we present the results of the decoupling of the VLQs that are important for our phenomenological applications, within the framework of the $G^{(\prime)}_{SM}$-EFTs,

$$\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm dim-4} + \sum_a C_a\,O_a, \qquad (15)$$

whose operators are invariant under either $G_{SM}$ or $G_{SM}'$, depending on the model. Thus in $G_{SM}$-models $\mathcal{L}_{\rm dim-4}$ coincides with the SM Lagrangian, and the corresponding non-redundant set of operators of dimension six has been classified in Ref. [26]. In $G_{SM}'$-models, operators invariant under $G_{SM}'$ must be added, which involve the $Z'$ boson and the additional scalar singlets and/or doublets. The Wilson coefficients $C_a$ are effective couplings, which are suppressed by $1/M^2$ — and their effects on observables by $v_i^2/M^2$ compared to the SM, with $v_i=v,v_1,v_S$ depending on the model. They are determined at the scale $\mu_M$ when decoupling the VLQs. The decoupling proceeds either by explicit matching calculations, starting at tree level and subsequently including higher orders, or by integrating the VLQs out with path-integral methods [3]. The tree-level decoupling has been known for a long time for $G_{SM}$ models [3] and is given for $G_{SM}'(S)$ models in Ref. [9]. Within the EFT, the RG equations (16) allow one to evolve the Wilson coefficients from $\mu_M$ down to $\mu_{EW}$. In leading logarithmic approximation, retaining only the first logarithm (1stLLA), they have the approximate solution

$$C_a(\mu_{EW}) \;\approx\; C_a(\mu_M) - \frac{\gamma_{ba}}{16\pi^2}\,\ln\frac{\mu_M}{\mu_{EW}}\;C_b(\mu_M),$$

which holds as long as the second term remains small compared to the first. The anomalous dimension matrices (ADM) $\gamma_{ab}$ depend in general on the couplings of the gauge, Yukawa and scalar sectors and are known for the $G_{SM}$-EFT [27-29]. The largest contributions might be expected from $\gamma_{ab}\propto Y_u^\dagger Y_u\sim y_t^2$ mixing, due to the top-quark Yukawa coupling $y_t\sim1$ — of the order of a few percent in the case of self-mixing ($a=b$) — and from QCD mixing governed by $\alpha_s$. For $a\neq b$, on the other hand, non-vanishing Wilson coefficients can first be generated at 1stLLA order. In particular, as we will see below, in models with right-handed neutral currents left-right operators can be generated in this manner, with profound direct impact on $|\Delta F|=2$ transitions and thereby on the predictions for $|\Delta F|=1$ observables.

[Figure 1: Tree-level graphs (a) and (b) of the decoupling of a VLQ $F_m$ that give rise to $\psi^2\varphi^2D$ operators. They proceed via the Yukawa interactions with the scalars $\varphi=H,S,\Phi$ and the SM quarks $\psi=q_L,u_R,d_R$. The gauge boson $G_\mu$ depends on the representation. The tree-level graph (c) requires two representations $F_{m,n}$ with a Yukawa coupling via $\varphi_c$ and gives rise to $\psi^2\varphi^3$ operators.]

The VLQs have a very limited set of couplings to the light fields: either gauge interactions (6) with the gauge bosons, or Yukawa interactions (8)-(10) with the — relative to the VLQ mass $M$ — light SM quarks and scalars $\varphi=H,S$ or $\Phi$, depending on the model. At tree level, this particular structure of interactions can give rise only to flavour-changing $Z$ and $Z'$ couplings, whereas all other decoupling effects are loop-suppressed [30]. The decoupling of the VLQs proceeds in the unbroken phase of $SU(2)_L\otimes U(1)_Y$; hence the quark fields are flavour eigenstates and the neutral components of the scalar fields have no VEVs at this stage. After the RG evolution from $\mu_M$ to $\mu_{EW}$, spontaneous symmetry breaking takes place within the $G^{(\prime)}_{SM}$-EFTs, and the transformation from flavour to mass eigenstates for fermions and gauge bosons can be performed, accounting for the dimension-six part in Eq. (15).
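As a rough check of the "few percent" estimate for the top-Yukawa induced self-mixing quoted above (elementary arithmetic with $y_t\simeq1$; the choices $\mu_{EW}\simeq160$ GeV and $\mu_M=10$ TeV are illustrative):

$$\frac{y_t^2}{16\pi^2}\,\ln\frac{\mu_M}{\mu_{EW}} \;\simeq\; \frac{1}{158}\,\ln\frac{10\,\text{TeV}}{160\,\text{GeV}} \;\approx\; 0.026 .$$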
Tree-level decoupling and $Z$ and $Z'$ effects

The couplings of the VLQs permit at tree level only a dimension-six contribution from the generic 4-point diagram in Fig. 1a. Since its dimension-five contribution vanishes [3], it is equivalent to consider the 5-point diagram of Fig. 1b, where either $SU(2)_L$ or $U(1)_Y$ gauge bosons in $G_{SM}$-models — or in addition a $\hat Z'$ in $G_{SM}'$-models — are radiated off the VLQ [3,9].

[Table 2: Definitions of the $\psi^2\varphi^2D$ operators, following the notation of [26] except for the signs of the gauge couplings in the covariant derivatives, and of the $(\psi^2\varphi^3+{\rm h.c.})$ operators, given for $G_{SM}$ models and extended to $G_{SM}'(S)$- and $G_{SM}'(\Phi)$-models ($\varphi=H,S,\Phi$). Superindices $i,j=1,2,3$ on quark fields denote the generations. These are all operators that can arise from the tree-level decoupling of VLQs, depending on the model.]

As a consequence, in $G_{SM}$- and $G_{SM}'$-models only operators of the type $\psi^2\varphi^2D\propto(\varphi^\dagger i\overleftrightarrow{D}_\mu\varphi)[\bar\psi_i\gamma^\mu\psi_j]$ ($\varphi=H,S,\Phi$) receive non-vanishing contributions at tree level, which are projected in part onto $\psi^2\varphi^3$-type operators via the equations of motion (EOM) [26,31]. We list the corresponding definitions of the operators in Table 2, following the notation of [26] in the case of the $G_{SM}$-EFT and extending it to the $G_{SM}'$-EFTs. After spontaneous symmetry breaking, the $\psi^2\varphi^3$ operators contribute to the quark masses $m_\psi$ ($\psi=u,d$) at the scale $\mu_{EW}$ via (17), which allows one to express the Yukawa couplings $Y_\psi$ in terms of the measured $m_\psi$ and the new-physics parameters.

If several VLQ representations are present in a given model and two of them, $F_{m,n}$, couple to a scalar $\varphi_c$ via Yukawa couplings $\lambda_{mn}$, a third possibility is allowed at tree level, depicted in Fig. 1c, which contributes directly to $\psi^2\varphi^3$ operators and gives rise to flavour-changing neutral $H\psi_i\psi_j$ interactions at tree level [3]. The various possibilities for $G_{SM}$ models, where $\varphi_c=H$, can be found in [3]. The relation (17) of the quark masses to the Yukawa interactions then includes also $1/M^2$ contributions. The diagonalisation proceeds as usual for the quark fields, with the help of $3\times3$ unitary rotations in flavour space, implying diagonal up- and down-quark mass matrices $m_\psi^{\rm diag}$ and the unitary quark-mixing matrix $V$. In the limit of vanishing dimension-six contributions, $V$ becomes the Cabibbo-Kobayashi-Maskawa (CKM) matrix of the SM. Throughout, we assume for the down quarks the weak basis in which the mass term $m_d$ is already diagonal, implying $q_L=(V^\dagger u_L,d_L)^T$. This fixes also the definition of the Wilson coefficients $C_{\psi^2\varphi^2D}$ (for more details see [32]) and the basis for the VLQ Yukawa couplings $\lambda_i^{\rm VLQ}$.

After spontaneous symmetry breaking, the $\psi^2\varphi^2D$ operators give rise to flavour-changing $Z$ and $Z'$ interactions of the fermions ($f=\ell,u,d$), which we parametrise as

$$\mathcal{L}\;\supset\;\bar f^i\gamma^\mu\big[\Delta_L^{ij}(Z)\,P_L+\Delta_R^{ij}(Z)\,P_R\big]f^j\,Z_\mu \;+\; \bar f^i\gamma^\mu\big[\Delta_L^{ij}(Z')\,P_L+\Delta_R^{ij}(Z')\,P_R\big]f^j\,Z'_\mu. \qquad (20),\,(21)$$

For completeness, we provide the matching conditions for the Wilson coefficients in Appendix B. We note that RG effects have been neglected in (20) and (21), since they are only due to the self-mixing of the $\psi^2\varphi^2D$ operators, as listed in Appendix B.3. The flavour-diagonal ($i=j$) couplings of the leptons to the $Z$ will be set to the SM ones, as NP corrections to them are one-loop suppressed in $G_{SM}$-models. The same holds in $G_{SM}'(S)$ models, where the $Z$ does not play any role in FCNCs. In $G_{SM}'(\Phi)$ models, modifications of the $Zf\bar f$ couplings come from $Z-Z'$ mixing. These shifts are relevant for the leptons in the partial widths $Z\to\ell\bar\ell$ (see Appendix A.2) and could be of relevance in electroweak precision tests.
In the semileptonic $|\Delta F|=1$ FCNCs we include them for consistency in $G_{SM}'(\Phi)$ models, although they are negligible in comparison to other effects.

$G_{SM}$-models

In the case of $G_{SM}$-models, the decoupling of the VLQs gives the results for the $\Delta_{L,R}(Z)$ couplings collected for down quarks in Table 3, in terms of the combination $\Delta_{ij}$ defined in Eq. (22). Except for the sign in the case of $T_u$, our results agree with those in [11]. Furthermore, also non-zero couplings to up-type quarks arise [11], but they will not play any role in our paper.

[Table 3: $\Delta_{L,R}(Z)$ for the down- and up-type-quark couplings ($i,j=1,2,3$) to the $Z$ boson in $G_{SM}$-models. Here $V_{ij}$ is the CKM matrix and $\Delta_u=\Delta(\lambda_{V_d i}\to\lambda_{V_u i})$, see (8).]

$G_{SM}'$-models

In the $G_{SM}'$-models, the $(L_\mu-L_\tau)$ symmetry fixes the $Z'$ couplings to the leptons, given in (23), to be purely vectorial. Here we have neglected the $Z-Z'$ mixing effects existing in $G_{SM}'(\Phi)$-models. However, for consistency we have to include these effects in the couplings of the $Z$ to leptons to first order in the small mixing angle $\xi_{ZZ'}$ (see Appendix A.2 for details). The gauge couplings to quarks, on the other hand, are model-dependent. In $G_{SM}'(S)$-models the scalar sector of $S$ and $H$ generates non-zero quark couplings only to the $Z'$, whereas in $G_{SM}'(\Phi)$-models the scalar sector of $S$, $H$ and $\Phi$ gives rise to non-zero couplings of the SM quarks to both $Z$ and $Z'$. We define the corresponding factors $K_{ij}$ in (25), with $\Delta_{ij}$ defined in Eq. (22) and the $Z-Z'$ mixing angle $\xi_{ZZ'}$ [see (102)]. Here $c_\beta\equiv\cos\beta$ is a parameter associated with the scalar sector of the $G_{SM}'(\Phi)$-models (see (13)), i.e. $v_1=v\cos\beta$. The angle $\xi_{ZZ'}$ describes $Z-Z'$ mixing, which is phenomenologically constrained to be small, $\xi_{ZZ'}\lesssim0.1$, by the $Z$-boson mass $M_Z$ and the partial widths $Z\to\ell\bar\ell$ measured at LEP, as described in more detail in Appendix A.2. The down- and up-quark couplings to $Z$ and $Z'$ are collected for these models in Table 4. We confirm the previous findings of [9] for the $G_{SM}'(S)$-models. We note that the $Z'$ couplings are suppressed or enhanced by the ratio $r$ of (26) w.r.t. the $Z$ couplings. Enhancement takes place for $2g'X>g_Z\approx0.75$, such that for example $r\approx3$ can be reached with $g'X\approx1.1$, still within the perturbative regime. The couplings of $T_d$ and $T_u$ differ just by a sign and factors of $1/2$. In distinction to the $Z$ contributions in $G_{SM}$-models, both the $Z$- and $Z'$-contributions in $G_{SM}'(\Phi)$ models decouple at large $\tan\beta$, see $K_{ij}$ in Eq. (25).

Decoupling at one-loop level

All other decoupling effects proceed via loops. Those that would lead to non-canonical kinetic terms in the $G^{(\prime)}_{SM}$-EFTs can be absorbed by a suitable choice of wave-function renormalisation constants in the full theory above the scale $\mu_M$, resulting in a non-minimal renormalisation of the interactions and giving rise to finite threshold effects in the coupling constants. In $G_{SM}'$-models this is the case for the kinetic mixing of $B_\mu$ and $\hat Z'_\mu$, which enters our analysis only as a higher-order effect. All other effects enter as dimension-six operators. The ones with four quarks are the most important for quark-flavour phenomenology. They involve only VLQ Yukawa interactions, as depicted in Fig. 2a and Fig. 2b, and give rise to $\psi^4$-type operators, among which are also $|\Delta F|=2$ operators. Here we match directly onto the operators present in the phenomenological EFT of $|\Delta F|=2$ decays, using the conventions of Appendix C.1, thereby avoiding the intermediate matching onto the $G^{(\prime)}_{SM}$-invariant form. Still, we outline this step for completeness here.
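For orientation in what follows, the $|\Delta F|=2$ operators in question have the generic form below (a sketch in a commonly used convention; the paper's precise normalisation and colour structures are fixed in its Appendix C.1, which is not part of this excerpt):

$$O^{ij}_{\rm VLL}=(\bar d^i\gamma_\mu P_L d^j)(\bar d^i\gamma^\mu P_L d^j),\qquad O^{ij}_{\rm VRR}=(\bar d^i\gamma_\mu P_R d^j)(\bar d^i\gamma^\mu P_R d^j),$$
$$O^{ij}_{\rm LR,1}=(\bar d^i\gamma_\mu P_L d^j)(\bar d^i\gamma^\mu P_R d^j),\qquad O^{ij}_{\rm LR,2}=(\bar d^i P_L d^j)(\bar d^i P_R d^j).$$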
In the VLQ models considered there are four relevant $\psi^4$ operators in the $G^{(\prime)}_{SM}$-EFTs at the VLQ scale $\mu_M$, and a fifth operator is generated through QCD mixing via the RG evolution from $\mu_M$ to $\mu_{EW}$. (The set of $\psi^4$-type operators is the same in all $G^{(\prime)}_{SM}$ models; a non-redundant set can be found in Ref. [26].) These are the $(LL)(LL)$ operators and the $(RR)(RR)$ operator, with $kl=ij$ for $|\Delta F|=2$ processes and $T^A$ denoting the $SU(3)_c$ colour generators in the colour-octet structures. Their Wilson coefficients are matched onto the ones of the $|\Delta F|=2$ phenomenological EFT at the electroweak scale $\mu_{EW}$ [32], with the normalisation $N_{ij}$ given in (134). We anticipate this matching already at the VLQ scale $\mu_M$, as there are no RG effects of phenomenological importance for the discussion of the $B$-meson and Kaon sectors; for more details see Section 3.3, where also the QCD mixing of these operators is given. Since the Wilson coefficients of these operators are generated at $\mu_M$ at one loop, their interplay with other sectors of quark-flavour physics through RG mixing is of higher order and hence beyond the scope of our work.

In $G_{SM}$-models the VLQs contribute to the $|\Delta F|=2$ operators $O^{ij}_a$ for $a={\rm VLL},{\rm VRR},{\rm LR,1}$ via box diagrams (see Figs. 2a and 2b), which contain two heavy VLQ propagators with representations $F_m$ and $F_n$ and the massless components of the standard doublet $H=(H^+,H^0)^T$. These box diagrams yield the general structure (31) of the Wilson coefficients at the scale $\mu_M$, where the prefactor corresponds to the SM normalisation of the $|\Delta F|=2$ EFT, see (134), and the loop function (32) depends on the VLQ masses of the representations $F_{m,n}$. The couplings $\Lambda^m_{ij}$ are the products of VLQ Yukawa couplings defined in (33). The index $a$ of the operator and the numerical factors $\eta_{mn}$ are collected in Table 5. Note that $a={\rm VLL}$ for $F_{m,n}=D,T_d,T_u$ and $a={\rm VRR}$ for $F_{m,n}=Q_d,Q_V$, whereas $a={\rm LR,1}$ for $F_m=D,T_d,T_u$ and $F_n=Q_d,Q_V$. The factors $\eta_{mn}$ are positive, except for the interference of $F_m=D,Q_d,T_d$ with $F_n=Q_V,T_u$: in this case the scalar propagators are crossed, which gives rise to an additional sign w.r.t. the diagram with non-crossed scalar propagators. For $F_m=F_n$ these results agree with [11] for $D$, $T_u$, $T_d$, but for $Q_d$ (model XI) we find an additional factor of 2. Concerning $Q_V$ (model IX), we find a contribution to $O_{\rm VRR}$ instead of $O_{\rm VLL}$, and also an opposite sign. For completeness we provide also the results for $F_m\neq F_n$. In $G_{SM}'(S)$ models we consider only the VLQs $D$ and $Q_V$ and their interference, which agrees with [9] except for a minus sign from the crossed scalar propagators in the interference term $D\times Q_V$. The results for $G_{SM}'(\Phi)$ models follow straightforwardly from the ones of the $G_{SM}$ models, bearing in mind that (8) and (10) are equivalent up to the replacement $H\to\Phi$.

Renormalisation group evolution

The VLQ tree-level exchange in the considered scenarios generates only $\psi^2\varphi^2D$- and $\psi^2\varphi^3$-type operators at the scale $\mu_M$, with non-vanishing Wilson coefficients (see Appendix B) depending on the VLQ scenario. The RG evolution from $\mu_M$ down to $\mu_{EW}$ can induce, via operator mixing, leading-logarithmic contributions also to other classes of operators in the $G^{(\prime)}_{SM}$-EFTs at the scale $\mu_{EW}$. These operators are possibly related to a variety of processes and thus imply additional potential constraints. The largest enhancements can appear if the ADM $\gamma_{ab}$ in (16) is proportional to the strong coupling, $4\pi\alpha_s\sim1.4$, or to the top-Yukawa coupling, $y_t\sim1$.
Note that QCD mixing is flavour-diagonal and hence cannot give rise to genuinely new phenomenological effects, i.e. one cannot expect qualitative changes. Yukawa couplings, on the other hand, are the main source of flavour-off-diagonal interactions, and we focus on them here. The $SU(2)_L$ gauge interactions induce, via ADMs $\gamma_{ab}\propto g_2^2$ [29], only intra-generational mixing between $u_L^i\leftrightarrow d_L^i$ and are parametrically smaller than the $y_t$-induced effects, such that we do not consider them. The $U(1)_Y$ gauge interactions are flavour-diagonal only and numerically even more suppressed. Concerning $G_{SM}'$ models, RG effects due to top-Yukawa couplings are absent for the $\psi^2\varphi^2D$ and $\psi^2\varphi^3$ operators, because $\varphi=S,\Phi$ have no Yukawa couplings involving only the SM fields $q_L,u_R,d_R$, these being forbidden by the additional $U(1)_{L_\mu-L_\tau}$ charge. Hence the RG effects discussed below are not present in these scenarios. The ADMs due to Yukawa interactions can be found in [28] for the $G_{SM}$-EFT ($\varphi=H$), and we collect the ones involving the Wilson coefficients (35) in Appendix B.3. The RG equations of these Wilson coefficients are also coupled to those of the SM couplings, such as the quartic Higgs coupling and the quark Yukawa couplings [27], but in 1stLLA they decouple. The modification of the SM couplings due to dim-6 effects can be neglected in first approximation when discussing the RG evolution of the dim-6 effects themselves. Moreover, the quartic Higgs coupling is irrelevant for the processes discussed here, and the quark masses are determined from low-energy experiments, i.e. much below $\mu_{EW}$.

Phenomenologically most interesting are hence the RG effects of the mixing of $\psi^2H^2D$ and $\psi^2H^3$ operators into other operator classes that do not receive tree-level matching contributions at $\mu_M$: these are the classes $\psi^4$, $H^6$ and $H^4D^2$. We focus on the $\psi^4$ operators, which all turn out to be four-quark operators, because they are the most relevant for the processes of down-type quarks considered here; we comment briefly on the $H^6$ and $H^4D^2$ classes in Appendix B.3. The RG equation (16) implies for a specific $a\in\psi^4$, see also [18], the 1stLLA result (39) with $a\neq b$, such that the 1stLLA contributions are one-loop suppressed w.r.t. the tree-level generated $\psi^2H^2D$ contributions. Three of the $\psi^4$ operators (among them $O_{qd}$) can mediate down-type-quark $|\Delta F|=2$ processes, and all five can mediate $|\Delta F|=1$ processes, see again Appendix B.3. (We assume that in the VLQ scenario $Q_V$ the VLQ Yukawa couplings $\lambda_{V_u i}=0$; otherwise also $C_{Hu}$ and $C_{Hud}$ would have to be considered in this scenario.)

The $|\Delta F|=1$ four-quark operators modify hadronic $|\Delta F|=1$ processes directly, whereas they enter semileptonic $|\Delta F|=1$ processes only via additional operator mixing in both the SMEFT and the phenomenological EFTs, therefore receiving another suppression in semileptonic processes. The 1stLLA contribution is a novel effect for $|\Delta F|=2$ processes, where it competes with the direct one-loop box contribution in VLQ models discussed in Section 3.2. Semileptonic and hadronic $|\Delta F|=1$ processes, on the other hand, are generated directly by the $\psi^2H^2D$ operators in the next matching step, from the $G_{SM}$-EFT onto the phenomenological EFTs at $\mu_{EW}$ (see Section 4 and Fig. 4), and these direct contributions are therefore enhanced compared to the 1stLLA contributions discussed here. Consequently, the 1stLLA is one-loop suppressed in VLQ models in hadronic $|\Delta F|=1$ processes, unless the potentially novel chiral structure of the $\psi^4$ operators enhances a specific hadronic observable. We will return to this point in Section 4.3.
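Schematically, the 1stLLA generation of $\psi^4$ coefficients referred to above takes the form (a sketch only; the sign convention $16\pi^2\,\mathrm{d}C_a/\mathrm{d}\ln\mu=\gamma_{ba}C_b$ is assumed here, and the precise ADMs are those collected in the paper's Appendix B.3):

$$C_a(\mu_{EW}) \;\approx\; -\,\frac{\gamma_{ba}}{16\pi^2}\,\ln\frac{\mu_M}{\mu_{EW}}\;C_b(\mu_M),\qquad a\in\psi^4,\;\;b\in\psi^2H^2D,\qquad \gamma_{ba}\propto y_t^2,$$

so that a non-vanishing four-quark coefficient first appears at one loop times a potentially large logarithm.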
Under the transformation (18) from weak to mass eigenstates for the up-type quarks, the corresponding ADMs of the $\psi^4$ operators in Appendix B.3 transform with factors involving the up-type quark masses $m_k$ and the CKM products $\lambda^{(k)}_{ij}$ defined in (50). Since the ADMs are needed here for the evolution of the dim-6 Wilson coefficients themselves, we have used tree-level relations derived from the dim-4 part of the Lagrangian only, thereby neglecting dim-6 contributions, which would constitute dim-8 corrections in this context. In the sum over $k$ only the top-quark contribution is relevant ($m_{u,c}\ll m_t$), if one assumes that the unitary matrix $V$ is equal to the CKM matrix up to dim-6 corrections.

The $|\Delta F|=2$ mediating $\psi^4$ operators involve the combination (42). Via (39) and the explicit matching conditions (105) we obtain the 1stLLA contribution (43), with $\Lambda^m_{ij}$ from (33), the chirality of the $|\Delta F|=2$ operator fixed by the model, and the VLQ-model-dependent factor $\kappa_m$ given in (45). We note the relations (46), where the relative sign comes from the relative signs in (127) and (128) when inserted in (30). We point out the different flavour structure of the 1stLLA contribution (43) compared to that of the direct box contribution (31) discussed in Section 3.2: the former is linear, the latter quadratic in the product of VLQ Yukawa couplings $\Lambda_{ij}$. A detailed comparison of both contributions is given in Section 5.

Implications for the down-quark sector

In the previous section the decoupling of the VLQs at tree level — and, for $|\Delta F|=2$, at one-loop level — at the scale $\mu_M$ has been presented, including the most important effects of the RG evolution down to the electroweak scale $\mu_{EW}$. In this section we discuss the decoupling of the degrees of freedom of the order of $\mu_{EW}$ by matching onto the phenomenological $|\Delta F|=1,2$ EFTs. In the $G_{SM}$-models these degrees of freedom are the $W$ and $Z$ bosons, the top quark and the standard Higgs $h^0$, which all lie in the mass range $\mu_{EW}\in[80,180]$ GeV. In $G_{SM}'$ models also the $Z'$ and the additional scalars are present, which we allow to be heavier, up to the $\sim1$ TeV range; for the purpose of the decoupling, however, we ignore this hierarchy with respect to the heavy standard sector at $\sim100$ GeV.

In our analysis we will frequently use the general formulae for flavour observables in models with tree-level neutral gauge-boson exchanges collected in [35]. These formulae are given in terms of the so-called master one-loop functions, which have been used before in many concrete extensions of the SM, see [36] for a review. Our task is therefore to calculate the NP contributions to these functions in the VLQ models, using the results obtained in the previous section. To this end it is useful to adopt the notation of [35,36] and define the relevant CKM factors $\lambda_t^{(ij)}$ as in (50). The relevant master functions of the SM are flavour-universal and real-valued; for completeness their explicit expressions can be found in the appendices. In the considered VLQ models the new contributions not only break flavour universality but also bring in new CP-violating phases, so that minimal flavour violation (MFV) is violated.

$|\Delta F|=2$

The Wilson coefficients of the $|\Delta F|=2$ operators governing neutral Kaon and $B_q$-meson mixing ($q=d,s$), defined in Appendix C.1, can receive at the scale $\mu_{EW}$ several contributions, depicted in Fig. 3, depending on the model. Firstly, there are the local contributions, Fig. 3a, from the one-loop decoupling presented in Section 3.2, which are formally of order $v^2/M^2$ but one-loop suppressed.
Secondly, there are local 1stLLA contributions in $G_{SM}$ models from the top-Yukawa RG effects of the $\psi^2H^2D$ operators presented in Section 3.3, which are formally of order $(v^2/M^2)\ln(M/v)$ and also one-loop suppressed. Thirdly, there are double insertions of flavour-changing $Z^{(\prime)}$ couplings, Fig. 3b, which count formally as $v^4/M^4$ due to the double insertion, but are generated already at tree level. Fourthly, when considering several VLQ representations, also double insertions of $\psi^2\varphi^3$-type operators [3], generating flavour-changing neutral Higgs exchange, can contribute in analogy to Fig. 3b upon replacing the $Z^{(\prime)}$ by $h^0$; in this case non-vanishing contributions can also arise to the operators $O_{S\chi\chi,1}$ with $\chi=L,R$ and $O_{\rm LR,2}$ [32].

Unless we consider several VLQ representations simultaneously, the new-physics contributions from box diagrams, the top-Yukawa generated 1stLLA contributions in LH $G_{SM}$ models and the double insertions of flavour-changing $Z^{(\prime)}$ couplings involve only the operators $O^{ij}_{\rm VLL}$ and $O^{ij}_{\rm VRR}$. Below $\mu_{EW}$ they obey the same RG evolution (49) — with the appropriate change of the number of active quark flavours, $N_f=6\to5$ — and enter the $M_{12}$ element of the mass-mixing matrix through the combination $S_{ij}=S_0(x_t)+\Delta S_{ij}$ (54), with $\Delta S_{ij}$ denoting the VLQ contributions; the SM contribution is given at LO by $S_0(x_t)$, see (136). We have $\Delta S_{ij}=[\Delta S_{ij}]_{\rm VLL}+[\Delta S_{ij}]_{\rm VRR}$ (55), although in a given model only one of these contributions is present. If two different models containing LH and RH couplings are combined, the most important operators in $|\Delta F|=2$ are not these two, but $O^{ij}_{\rm LR,1}$ and $O^{ij}_{\rm LR,2}$.

The $[\Delta S_{ij}]_{V\chi\chi}$ with $\chi=L,R$ include quite generally the box diagrams with VLQ and scalar exchanges, the top-Yukawa generated 1stLLA contributions in LH $G_{SM}$ models, as well as the tree-level $Z$ and $Z'$ contributions, and can be written as in (56), where $C^{ij}_{V\chi\chi}(\mu_{EW})$ is given by (49) for $\chi=R$, and by the sum of (49) and (43) for $\chi=L$. The $r_V$ for $V=Z,Z'$ are NLO QCD corrections to Fig. 3b from the decoupling of the $V$ boson at the scale $\mu=\mu_{EW}$ [38]. Note the model dependence of the factors $\Delta^{ij}_\chi(Z)$ and $\Delta^{ij}_\chi(Z')$, given in Table 3 and Table 4, and the different dependence on the VLQ mass of these factors and of $C^{ij}_{V\chi\chi}(\mu_{EW})$.

The top-Yukawa operator mixing generates LR operators in RH $G_{SM}$ models already for a single VLQ representation. When two or more representations are considered, also LR and SLL (SRR) operators contribute in principle. The Wilson coefficients of the LR operators can receive contributions from box diagrams, top-Yukawa generated RG effects and tree-level $Z^{(\prime)}$ exchanges, whereas those of the SLL (SRR) and LR,2 operators arise from tree-level $h^0$ exchange. The results for all box contributions to $C^{ij}_{\rm LR,1}$ are given in formulae (31) and (34), and the RG evolution in (49), to which the top-Yukawa generated 1stLLA contributions (43) have to be added in RH $G_{SM}$ models. Adding the $Z$- and $Z'$-contributions, one arrives at (57), with the couplings $\Delta^{ij}_\chi(Z^{(\prime)})$ ($\chi=L,R$) collected in Table 3 and Table 4, and $N_{ij}$ defined in (134). The RG evolution from $\mu_{EW}$ to $m_b$ is performed at NLLA accuracy for the SM contribution and at LLA accuracy for the VLQ contribution.

$|\Delta F|=1$

Semileptonic decays in the down-quark sector receive VLQ contributions via the tree-level $Z$ and $Z'$ exchanges depicted in Fig. 4a. They lead to modifications of the Wilson coefficients of the corresponding phenomenological EFTs of $d_j\to d_i\nu\bar\nu$ and $d_j\to d_i\ell\bar\ell$ decays, given in Appendix C.2 and Appendix C.3, respectively.
All Wilson coefficients in this section are formally given at $\mu_{EW}$, but since the corresponding operators are conserved currents under QCD, the RG evolution to the scale $\mu_b$ is trivial in all cases. The $V=Z,Z'$ contributions modify the Wilson coefficients $C^\nu_{L,R}$ and the corresponding one-loop functions entering the expressions for $d_j\to d_i\nu\bar\nu$ decays like $K^+\to\pi^+\nu\bar\nu$, $K_L\to\pi^0\nu\bar\nu$ and also $B\to K^{(*)}\nu\bar\nu$, with more details in Appendix C.2. The Wilson coefficients of the operators entering the $d_j\to d_i\ell\bar\ell$ transitions receive the contributions given in (58), where the leptonic $Z$ couplings are taken to be those of the SM, except in $G_{SM}'(\Phi)$ models, where $Z-Z'$ mixing is included following (24). There are no $Z'$ contributions to $C_{10(10')}$, as the $Z'$ couplings to leptons are vectorial, see (23). The purely leptonic decay $K_L\to\mu\bar\mu$ is described by the corresponding expressions with $(s\to d)$.

Tree-level $Z^{(\prime)}$ exchange contributes at the scale $\mu_{EW}$ also to the Wilson coefficients of the QCD- and EW-penguin operators [40]. The RG evolution then induces non-vanishing contributions to the remaining QCD- and EW-penguin operators at the lower scales relevant for Kaon and $B$-meson decays. Here we are mainly interested in CP violation in the Kaon sector, especially $\varepsilon'/\varepsilon$. It is known from various analyses of $\varepsilon'/\varepsilon$, see [40] and references therein, that NP has to generate contributions to the Wilson coefficients of the electroweak penguin operators $Q_8$ and $Q_8'$ at the low-energy scale in order to be able to modify the SM prediction significantly. This requires the presence of both LH flavour-violating couplings and RH flavour-diagonal couplings of the $Z$ or $Z'$ in the case of $Q_8$, or of RH flavour-violating couplings and LH flavour-diagonal couplings in the case of $Q_8'$. In the models considered, however, the quark couplings of the $Z'$ are either purely LH or purely RH, so that such $Z'$ contributions can only be generated as a higher-order effect. Given that $(V-A)$ and $(V+A)$ flavour-diagonal $Z$ couplings to the SM quarks are always present, tree-level $Z$ exchanges fully dominate. The pattern of NP contributions to $\varepsilon'/\varepsilon$ is then as follows:

• Within the $G_{SM}$- and $G_{SM}'(\Phi)$-models, the contributions are governed by the chirality of the flavour-violating $Z$ couplings of the given representation (singlets, doublets, triplets).

• In $G_{SM}'(S)$-models $\varepsilon'/\varepsilon$ remains SM-like, which could become problematic, as we briefly discuss below.

Tree-level $Z$ contributions to $\varepsilon'/\varepsilon$ have recently been considered in detail in Ref. [40], where explicit expressions for the relevant hadronic matrix elements $\langle Q_8(m_c)\rangle_2$ and $\langle Q_8'(m_c)\rangle_2$ can be found. Whereas these matrix elements differ only by a sign from each other, the corresponding Wilson coefficients differ also in magnitude, the one of $Q_8$ being larger by a factor of $c_W^2/s_W^2=3.33$. This can also be seen in Eq. (62), remembering that the Wilson coefficients of $Q_8$ and $Q_8'$ at $\mu=m_c$ are directly related to the Wilson coefficients of $Q_7$ and $Q_7'$ at $\mu_{EW}$, respectively. Finally, let us mention that the top-Yukawa generated 1stLLA contributions to $|\Delta F|=1$ operators in $G_{SM}$ models, discussed in Section 3.3, induce operators with the same chiral structure as those already present from the $Z$ exchange due to the $\psi^2H^2D$ operators; in particular, the $\psi^2H^2D$ Wilson coefficients generate $\psi^4$ Wilson coefficients via the mixing given in (127)-(132). Given their additional suppression w.r.t. the existing contributions, we do not consider them further.

The SM analysis of [19] finds $(\varepsilon'/\varepsilon)_{\rm SM}=(1.9\pm4.5)\times10^{-4}$. This result differs by $2.9\,\sigma$ from the experimental world average of the NA48 [42] and KTeV [43,44] collaborations, $(\varepsilon'/\varepsilon)_{\rm exp}=(16.6\pm2.3)\times10^{-4}$ (68), suggesting that models providing an enhancement of $\varepsilon'/\varepsilon$ are favoured. A new analysis in Ref.
[22] confirms these findings. These results are supported by upper bounds on the matrix elements of the dominant penguin operators in the large-$N_c$ dual-QCD approach [21,45], which allow one to derive an upper bound on $\varepsilon'/\varepsilon$ [20], still $2\,\sigma$ below the experimental data. In particular, it has been demonstrated in Ref. [45] that final-state interactions are much less relevant for $\varepsilon'/\varepsilon$ than previously claimed in Refs. [46-53]. These findings significantly diminish hopes that improved lattice-QCD calculations will bring the SM prediction for $\varepsilon'/\varepsilon$ into agreement with the experimental data in (68), additionally motivating the search for NP models capable of alleviating this tension. In fact, it has been demonstrated that in general models with flavour-changing $Z$ and $Z'$ exchanges [40,54], in the Littlest Higgs model with T-parity [55], in 331 models [56,57] and in supersymmetric models [58-60], agreement with the data for $\varepsilon'/\varepsilon$ can be obtained, with interesting implications for other flavour observables. We will see in Section 6 that also in VLQ models large NP contributions to $\varepsilon'/\varepsilon$ are possible, such that agreement with the data in (68) can be obtained, with a significant impact not only on rare $K$ decays but also on $B$ decays.

Patterns of flavour violation

Our analysis involves the three model variants $G_{SM}$, $G_{SM}'(S)$ and $G_{SM}'(\Phi)$, with up to five VLQ representations. In this section we describe the patterns of flavour violation in $|\Delta F|=1,2$ FCNC processes in the Kaon and $B_{d,s}$-meson sectors that can be expected in these models, based on our results in Sections 3 and 4. The quantitative phenomenology depends, in addition to the NP parameters, on the CKM and hadronic ones and will be discussed in the next section. However, on the basis of the information collected so far, some general patterns of flavour violation emerge, and it is possible to state whether in a given model relevant NP contributions to a given observable can be expected. We hope that the collection of observations below will be useful in monitoring the numerical analysis of the next section.

$|\Delta F|=2$

In all models, local VLQ contributions to $|\Delta F|=2$ operators are generated at the VLQ scale $\mu_M$ via one-loop box diagrams. The contributions from tree-level exchanges of $Z$ and $Z'$ at the scale $\mu_{EW}$ are power-suppressed due to the hierarchy (14) and should therefore be numerically subleading, at least for large VLQ masses. This property decouples the $|\Delta F|=1$ and $|\Delta F|=2$ contributions to some extent, making it easier to accommodate potential tensions [61,62] in $\Delta F=2$ processes. In $G_{SM}$ models, additional contributions from four-fermion operators are generated through the Yukawa RG evolution from $\mu_M$ to $\mu_{EW}$. In the case of the models $Q_V$ and $Q_d$ these contributions turn out to be dominant for $\mu_M\geq1$ TeV in the $K$-meson system and very important in the $B_{d,s}$-meson systems. In the following we compare the various contributions one by one.

The $|\Delta F|=2$ box contributions given in Eqs. (31) and (34) depend only on the VLQ mass(es) $M$ and the Yukawa couplings $\lambda_i^{\rm VLQ}$, but neither on the gauge couplings nor on the scalar sector. Moreover, for a given VLQ representation they are equal in $G_{SM}$ and $G_{SM}'(\Phi)$ models, owing to the equality of (8) and (10) upon $H\leftrightarrow\Phi$. Hence the measurements of $|\Delta F|=2$ observables will result, for a given $M$, in the very same constraints on the $\lambda_i^{\rm VLQ}$ in both $G_{SM}$ and $G_{SM}'(\Phi)$ models.
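Before the detailed comparison, it is useful to keep in mind the parametric sizes of the three types of contributions (a heuristic sketch only: $O(1)$ factors, the $\eta$ coefficients, the overall normalisation $N_{ij}$ and the distinction between operators are all suppressed here):

$$[C]_{\rm box}\sim\frac{\Lambda_{ij}^2}{16\pi^2 M^2},\qquad [C]_{\rm RG}\sim\frac{y_t^2\,\lambda_t^{ij}\,\Lambda_{ij}}{16\pi^2 M^2}\,\ln\frac{\mu_M}{\mu_{EW}},\qquad [C]_{Z\text{-tree}}\sim\frac{v^2\,\Lambda_{ij}^2}{M^4},$$

so that the tree-level-to-box ratio scales as $16\pi^2 v^2/M^2$ — consistent with the statements below that the two are comparable for $M\approx1\text{--}2$ TeV, while the tree-level piece drops to the few-percent level at $M=10$ TeV.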
Using (55), the relative size of the box contribution to the $Z$-exchange contribution in $G_{SM}$ and $G_{SM}'(\Phi)$ models is given by (69), with $\eta_{LL}$ collected in Table 5, $r_Z\approx1$, and $a=4$ for $T_d$ and unity otherwise. While the $Z$ contribution is comparable to the box contribution for $M\approx1-2$ TeV, it amounts only to a few percent for $M=10$ TeV in $G_{SM}$ models, whereas in $G_{SM}'(\Phi)$ models the $Z$ contributions are suppressed by $c_\beta^4$. In $G_{SM}'(\Phi)$ models we have furthermore the analogous ratio (70), with $r_Z\approx r_{Z'}\approx1$. The $Z'$ exchange might therefore be more important than the $Z$ contribution for $M_{Z'}<M_Z$, depending on $r$, see (26), but both are suppressed w.r.t. the box contribution. In the $G_{SM}'(S)$ models the same picture holds qualitatively; however, the $Z$ exchange is absent and the relative size of box-to-$Z'$ exchange is different, reducing for $X=1$ to the result of Ref. [9]. In contrast to the $G_{SM}$ and $G_{SM}'(\Phi)$ models, we note the particular structure of the $Z'$ couplings, which are not suppressed by $M_Z^2/M_{Z'}^2$. A lower bound $|X|\,v_S=M_{Z'}/g'\gtrsim750$ GeV exists in $G_{SM}'(S)$ models, mainly from the combination of $Z\to4\mu$ and neutrino trident production [9]. This implies that only for $M\gtrsim9$ TeV the ratio $(\Delta S)_{\rm Box}/(\Delta S)_{Z'}\gtrsim1$, which shows the numerical importance of the $Z'$ contributions unless one considers much larger VLQ masses.

With only these contributions taken into account, the $|\Delta F|=2$ observables are not sensitive to the chirality of the VLQ interactions as long as only one VLQ representation is present, because the contributions are additive, as can be seen in (54). However, the inclusion of the RG Yukawa effects and of the NLO contributions discussed in [18] changes this picture drastically in the case of $G_{SM}$ models with flavour-changing RH currents ($Q_d$, $Q_V$) and has a significant impact also on the remaining three models with LH currents. In the case of the $D$, $T_d$ and $T_u$ models we find the result (74), with $\kappa_m$ given in (45) and $\eta_{mm}$ in Table 5. The NLO correction (75), involving the combination $\lambda_t^{im}[C_{Hq}]_{mj}+[C_{Hq}]_{im}\lambda_t^{mj}$, has been calculated in [18], where also the $x_t$-dependent functions $H_{1,2}$ can be found. The result for $H_1(x_t,\mu_{EW})$ of [18] has been confirmed in [63], where NLO corrections have been calculated in the context of a general analysis of $Z$-mediated NP, however — in contrast to [18] — leaving out the RG effects above the electroweak scale represented by $\ln\mu_M/\mu_{EW}$ in (74) and (76).

In the case of the $Q_d$ and $Q_V$ models the box and RG contributions yield coefficients of different operators; hence a meaningful comparison of their impact on observables has to include their QCD running between $\mu_{EW}$ and the light flavour scales (we choose 3 GeV for Kaons and $M_B$ for $B_{d,s}$) as well as the corresponding matrix elements. We find the ratio (77), with $R_{ij}$ including the RG factors and the ratio of the hadronic matrix elements; its numerical values follow from Eqs. (60) and (61) in [18]. The large chiral enhancement in the Kaon system renders the RG contribution dominant there, while in the $B_{d,s}$ systems it remains comparable with the box contribution.

$|\Delta F|=1$

In semileptonic $|\Delta F|=1$ processes, governed by $d_j\to d_i+(\ell\bar\ell,\,\nu\bar\nu)$, the VLQ contributions arise from tree-level $Z$ exchange in $G_{SM}$ models, from $Z'$ exchange in $G_{SM}'(S)$ models, and from both in $G_{SM}'(\Phi)$ models. It is instructive to begin the discussion with the $G_{SM}'(S)$ models, considered already in Ref. [9], as they involve only $Z'$ contributions to $\Delta F=1$ processes and the leptonic $Z'$ couplings have the special structure given in Eq. (23).
Moreover, as pointed out in that paper, the $|\Delta F|=1$ contributions of the VLQs in these models are independent of the scalar- and gauge-sector parameters, in contrast to the $|\Delta F|=2$ contributions, which depend on $v_S$. We find the following pattern of NP contributions:

• Due to the equality of the LH and RH $Z'$ couplings to leptons in (23), $Z'$ exchange contributes neither to $B_{s,d}\to\mu\bar\mu$ nor to $K_L\to\mu\bar\mu$. If future improved data show the need for NP contributions to $B_{s,d}\to\mu\bar\mu$, this will be a problem for this scenario.

• The crucial virtue of the $G_{SM}'(S)$ models, pointed out in [9], is the possibility of solving the LHCb anomalies; in particular, they can accommodate the violation of lepton-flavour universality (LFU).

• In $B\to K^{(*)}\nu\bar\nu$ only small contributions are possible, due to cancellations between the muon- and tau-neutrino contributions when averaging over neutrino flavours, as a consequence of the $U(1)_{L_\mu-L_\tau}$ symmetry.

• These cancellations are less efficient in $K^+\to\pi^+\nu\bar\nu$ due to the interference with the charm component, see Appendix C.2.

Considering next the $G_{SM}$ and $G_{SM}'(\Phi)$ models, in which tree-level $Z$ contributions to $\Delta F=1$ processes dominate, the most notable feature comes from the tree-level decoupling of the VLQs depicted in Fig. 1b, which implies a relationship between the flavour-changing $Z$ and $Z'$ couplings in these models, again owing to the equality of (8) and (10) upon $H\leftrightarrow\Phi$. Below the scale $\mu_M$, a $\psi^2\varphi^2D$ operator with the same Wilson coefficient is generated in both models, where $\varphi=H,\Phi$ in $G_{SM}$ and $G_{SM}'(\Phi)$ models, respectively. The covariant derivative is the same in both models, up to the additional $U(1)_{L_\mu-L_\tau}$ part in $G_{SM}'(\Phi)$ models. Upon spontaneous symmetry breaking at the scale $\mu_{EW}$, this operator becomes $\propto v^2$ in $G_{SM}$ models and $\propto v_1^2=v^2c_\beta^2$ in $G_{SM}'(\Phi)$ models, cf. (25), (22) and Table 4. Note that the additional modifications from $Z-Z'$ mixing in $G_{SM}'(\Phi)$ models do not affect the dependence on the $\lambda_i^{\rm VLQ}$. The suppression by $c_\beta^2$ can only be softened by going to very small $\tan\beta$; in order to guarantee perturbativity of the top-quark Yukawa coupling, $0.3\lesssim\tan\beta$ is required [64]. In Appendix A.2 we discuss further constraints on $\tan\beta$ in $G_{SM}'(\Phi)$ models from the measured $Z$ mass and the partial widths to leptons, which for $M_{Z'}<M_Z$ require $2\lesssim\tan\beta$, i.e. $c_\beta^2\lesssim0.2$; depending on the choice of $g'$ and $v_S$, this bound becomes even stronger. Therefore, VLQ effects in $|\Delta F|=1$ FCNC processes are generically suppressed in $G_{SM}'(\Phi)$ models w.r.t. $G_{SM}$ models.

As an example one might consider the Wilson coefficient $C_9^{ij}$ given in (58), governing $d_j\to d_i\ell\bar\ell$. The suppression factor of $G_{SM}'(\Phi)$ versus $G_{SM}$ models involves $c_\beta^2$ together with the mixing angle $\xi_{ZZ'}\sim M_Z^2/M_{Z'}^2$, which is small in most of the parameter space, so that the enhancement $(1-4s_W^2)^{-1}\sim10$ is overcompensated. The same comparison also shows the relative size of the $Z'$ and $Z$ contributions in $G_{SM}'(\Phi)$ models, the former being likewise suppressed by $M_Z^2/M_{Z'}^2$. Consequently, VLQ contributions to semileptonic $|\Delta F|=1$ FCNC decays are in most cases suppressed in $G_{SM}'(\Phi)$ w.r.t. $G_{SM}$ models. However, there are exceptions, related to the fact that with the parametric suppression of the $Z$ and $Z'$ couplings the Yukawa couplings are more weakly constrained by $\Delta F=1$ transitions than in $G_{SM}$ models, the constraints on the Yukawas now being governed by $\Delta F=2$ processes. The detailed numerical analysis of the next section then shows that the allowed NP effects in $\Delta M_K$ are in fact significantly larger than in $G_{SM}$ models.
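As a quick numerical check of the quoted bound (elementary arithmetic, using only $\tan\beta\gtrsim2$ from the text):

$$c_\beta^2 = \frac{1}{1+\tan^2\beta} \;\lesssim\; \frac{1}{1+2^2} = 0.2 .$$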
For a given flavour-changing transition, the correlations between different $|\Delta F|=1$ observables depend on whether the $Z^{(\prime)}$ has LH or RH flavour-violating quark couplings, and on the size of the corresponding leptonic $Z^{(\prime)}$ couplings. A summary is given in Table 6, where in addition to the $G_{SM}$ and $G_{SM}'(\Phi)$ models we include the $G_{SM}'(S)$ models discussed above. The generically small NP contributions to $C_9^{(\prime)}$ in $G_{SM}$ models are due to the smallness of the leptonic vector couplings of the $Z$ relative to the axial-vector ones. The additional generic suppression of NP effects in $G_{SM}'(\Phi)$ w.r.t. $G_{SM}$ is due to the aforementioned suppression by $c_\beta^2$.

We observe that in $G_{SM}$ models significant NP effects in $K^+\to\pi^+\nu\bar\nu$, $K_L\to\pi^0\nu\bar\nu$, $B_{s,d}\to\mu\bar\mu$, $B\to K^{(*)}\mu\bar\mu$ and $B\to K^{(*)}\nu\bar\nu$ are possible, but the LHCb anomalies in the angular observables of $B\to K^*\mu\bar\mu$ cannot be explained in these models, because the vector coupling of the $Z$ to muons is suppressed by $(1-4s_W^2)\sim0.1$ w.r.t. its axial-vector coupling. The LFU of the $Z$ couplings also precludes an explanation of the violation of this universality in $R_K$ hinted at by the LHCb data. Due to the particular structure of the $Z'$ couplings, the general pattern of NP contributions to $K^+\to\pi^+\nu\bar\nu$, $K_L\to\pi^0\nu\bar\nu$, $B_{s,d}\to\mu\bar\mu$, $B\to K^{(*)}\mu\bar\mu$ and $B\to K^{(*)}\nu\bar\nu$ in $G_{SM}'(\Phi)$ models is dominated by tree-level $Z$ contributions, as in $G_{SM}$ models, but because of the aforementioned suppression by $c_\beta^2$ these contributions are smaller — with the few exceptions mentioned above — than in the latter models. On the other hand, the presence of a $Z'$ with purely vectorial lepton couplings allows one in principle to address the LHCb anomalies more easily; given the generic suppression of the $Z'$ couplings, however, this is harder than in $G_{SM}'(S)$ models.

Hadronic $|\Delta F|=1$ processes, governed by $d_j\to d_i q\bar q$, receive VLQ contributions only from tree-level $Z$ exchange in $G_{SM}$ and $G_{SM}'(\Phi)$ models. The suppression of the VLQ effects by $c_\beta^2$ in $G_{SM}'(\Phi)$ models w.r.t. $G_{SM}$ models is the same as discussed above for semileptonic $|\Delta F|=1$ processes. Such contributions are entirely absent in $G_{SM}'(S)$ models, where $\varepsilon'/\varepsilon$ is generated — for example in the case of $d_j\to d_i d\bar d$ — either by $Z'$ double insertions or via box diagrams, both of which are additionally suppressed by $|\lambda_d|^2$ compared to $G_{SM}$ models.

[Table 6: "DNA" table for the NP contributions to the $b\to s\mu^+\mu^-$ Wilson coefficients $C^{(\prime)}_{9,10}$ and to the $d_j\to d_i\nu\bar\nu$ ones, $C^\nu_{L,R}$. A full symbol means that the NP contribution is potentially large, while a faint one stands for a generically small contribution, due to the suppressed vector couplings of the $Z$ to leptons compared to its axial-vector couplings. Smaller symbols in the $G_{SM}'(\Phi)$ models indicate the general suppression by $c_\beta^2$ w.r.t. $G_{SM}$ models.]

Determination of M

There is a common claim that from flavour-violating processes it is only possible to measure the ratio $g_{NP}/M_{NP}$, where $g_{NP}$ is the coupling present in a given theory and $M_{NP}$ is the NP scale. The scale tested by a given observable is then typically quoted as the value of $M_{NP}$ obtained by setting $g_{NP}=1$, and it changes correspondingly when the coupling is suppressed by some mechanism, as in the case of MFV. Here we would like to point out that in concrete models with correlations between $|\Delta F|=2$ and $|\Delta F|=1$ processes it is in general possible to determine $M_{NP}$ without making any assumptions on the couplings involved; a toy illustration is sketched below. This is particularly important if $M_{NP}$ should turn out to be beyond the reach of direct searches at the LHC.
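The following minimal Python sketch illustrates the mechanism (this is not the paper's formula (79): the normalisations, the function name and the numerical inputs are hypothetical, chosen only to exhibit the cancellation of the coupling). Assume schematically that the $|\Delta F|=2$ shift is box-dominated, $\Delta S=\eta\,|\Lambda|^2/M^2$, while the $|\Delta F|=1$ shift is dominated by tree-level $Z$ exchange, $\Delta Y=b\,\Lambda/M^2$, in units where the common prefactors are absorbed; then $\Delta S/|\Delta Y|^2=(\eta/b^2)\,M^2$ is independent of $\Lambda$:

```python
import numpy as np

def determine_M(dS, dY, eta, b):
    """Toy extraction of the VLQ mass from |Delta F|=2 and |Delta F|=1 shifts.

    Assumed (hypothetical) scalings, with all couplings absorbed into Lam:
        dS = eta * |Lam|**2 / M**2   # one-loop box, |Delta F|=2
        dY = b   *  Lam     / M**2   # tree-level Z exchange, |Delta F|=1
    Then dS / |dY|**2 = (eta / b**2) * M**2, so M follows without knowing Lam.
    """
    return abs(b) * np.sqrt(abs(dS)) / (np.sqrt(eta) * abs(dY))

# Consistency check with made-up inputs: build the shifts from a chosen (Lam, M)
# and verify that M is recovered without ever using Lam.
Lam = 0.3 * np.exp(1j * 0.7)        # arbitrary complex coupling product
M_true, eta, b = 7.0, 1.0, 1.0      # M in TeV; eta, b are model-dependent O(1) factors
dS = eta * abs(Lam)**2 / M_true**2
dY = b * Lam / M_true**2
print(determine_M(dS, dY, eta, b))  # -> 7.0, independent of Lam and its phase
```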
In the context of 331 models the relevant correlations that allow the determination of M_Z′ can be found in Section 7.2 of [37], although this point has not been made there. In order to illustrate this in the case of VLQ models we consider the G_SM models. Let us consider the example of ∆M_s and first take into account, for the shift ∆S, only the box contributions with VLQ exchanges. On the other hand, ∆Y, entering the branching ratio for B_s → µμ, is governed by tree-level Z exchange. Then we find, independently of the Yukawa couplings and the CKM parameters, the useful formula (79), in which the η_mm are given in Table 5 and b = 1 for D and Q_V, b = −1 for T_u and Q_d, and b = 1/2 for T_d. Note that ∆S and ∆Y are in general complex, but their phases are related, so that the r.h.s. of this equation is real-valued. Extracting ∆S and ∆Y from experiment, a range for M can be determined. This formula is modified in the presence of Yukawa RG effects and when the simple tree-level Z contributions cannot be neglected:

• For sufficiently large M the Yukawa RG effects become important. As these contributions have the same dependence on the couplings as the ∆F = 1 amplitudes, and their dependence on the VLQ mass differs only by a logarithm, the determination of M will not be possible if the RG contribution dominates. However, we expect this situation only for RH G_SM models in the Kaon sector, as explained above. If the RG and box contributions are comparable, the determination of M will be possible, although the relevant expressions will be more involved than (79).

• For sufficiently low M the tree-level Z contributions to |∆F| = 2 could become important and again dilute the sensitivity to M. However, if VLQs are not found at the LHC, the value of M is sufficiently large that these contributions are numerically irrelevant. On the other hand, if VLQs are discovered at the LHC, we will know their masses and this determination will not be necessary; instead, the determination of the couplings would improve.

In summary, the determination of M outside the reach of the LHC will depend on the relevance of the box contributions relative to the RG Yukawa effects. Unless the RG contributions are clearly dominant, which is only the case in the Kaon sector for RH scenarios, this determination should be possible by means of a formula like (79). The determination is expected to work best for LH scenarios, but also for RH scenarios it should remain possible for b → d, s transitions, as discussed in the following section.

Kaon and B-meson systems

The correlations between flavour observables in different meson systems are governed by the Yukawa structure of the model in question, as will be elaborated quantitatively in Section 6. The important property of VLQ models is that the products defined in Eq. (33), together with the VLQ mass M, determine at the same time the flavour-violating j → i couplings of Z and Z′, as well as the flavour-diagonal Z couplings to quarks. The relevant flavour-changing parameters are hence Λ^m_ds in Kaon decays, and Λ^m_db, Λ^m_sb in b → d, s transitions of B mesons, respectively. Since only the relative phases of the λ_i^VLQ enter the Λ^m_ij, the phases ϕ^m_ij fulfil the relation (80), dropping the index m of the VLQ representation for convenience. This leaves us with five parameters for the three complex quantities Λ_ij. The phases ϕ_ij can vary in the full range [−π, π], implying the occurrence of discrete ambiguities when determining them from experiment, as explicitly seen in the plots of Ref. [35] and in the plots of the next section.
They can be resolved using observables where interference with the SM occurs. The absolute values of the λ_i^VLQ can be determined in addition.

Table 7: "DNA" of flavour effects in VLQ models. A star indicates that significant effects in a given model and a given process are in principle possible, but could be reduced (see Section 6) through correlations among several observables. An empty space means that the given model does not predict sizeable effects in that observable. Stars of one colour indicate left-handed currents and stars of the other colour right-handed ones; smaller stars indicate the suppression of |∆F| = 1 decays in G_SM(Φ) models.

One might expect the strongest constraints to stem numerically from s → d processes, because of the strong suppression of the SM contribution by V_td V*_ts. In a sense, as seen more explicitly in the next section, the flavour structure of VLQ models has some parallels to the one in 331 models [37,56,57,65]. However, in 331 models the NP contributions are dominated by Z′ tree-level exchanges, and once the constraints from B_s,d observables are taken into account, NP effects in the K system are found to be small, with the exception of ε′/ε. In the present analysis important Z-boson contributions are present, and this allows for more interesting NP effects than in 331 models in K⁺ → π⁺νν and K_L → π⁰νν. Furthermore, the partial decoupling of |∆F| = 1 and |∆F| = 2 processes, due to the presence of the important box-diagram contributions to |∆F| = 2 processes in VLQ models discussed above, modifies the corresponding correlations derived in Ref. [35], increasing the impact of the |∆F| = 2 constraints on |∆F| = 1 processes relative to the one found in [35]. The latter is also true for the RG effects in G_SM models, specifically for RH scenarios, where the importance of ∆F = 2 can be drastically enhanced. In Table 7 we summarize the patterns discussed above.

Numerics

In this section we perform the numerical analysis of the VLQ models presented above. For this purpose we start by constraining the VLQ couplings with the available flavour data and, if applicable, also with data from other sectors. We proceed by presenting the predictions for a number of key observables given these constraints, including their correlations where they are sizeable. These fits are performed for different VLQ masses, in order to illustrate the explicit mass dependence of the flavour observables discussed in Section 5.3.

Model-independent constraints on ψ²ϕ²D operators have been derived from Z- and W-boson observables [66]; they are applicable to G_SM models. Although these constraints are not entirely independent of other operators, in VLQ models the latter are loop-suppressed and can be neglected. The constraints on the modulus of the couplings are weak, of the order |λ_i| ≲ M/(1 TeV).¹⁴ More stringent constraints derive from |∆F| = 2, 1 flavour observables [11]. We constrain the five parameters |Λ_ij| and ϕ_ij (80) with the |∆F| = 2, 1 processes listed in Table 8. The master formulae used in these constraints are collected in Appendix C. The SM predictions in Table 8 are based on the determination of the CKM parameters from a tree-level fit, given in Table 13. Some comments regarding the included observables are in order:

• The observable ∆M_K does not provide constraints in G_SM models, and in G_SM(Φ) models it is omitted due to too large uncertainties from long-distance contributions.
The prospects for controlling this long-distance part by lattice calculations are good [67], and in the future this constraint could play an important role.

• We find that huge NP effects in ε′/ε are not excluded by the constraints listed in Table 8 in G_SM and G_SM(Φ) models, such that we impose bounds on the NP contribution (ε′/ε)_NP itself, in order to avoid showing predictions for other observables that are easily excluded by ε′/ε, and in order to analyse its influence on the correlations of observables. This range roughly corresponds to the NP required assuming the present predictions from lattice QCD. We have checked that decreasing this range to [5, 10] × 10⁻⁴, as expected from the dual approach to QCD [21,45], would have only a minor impact on the global fit, as what matters is the unique selection of the sign of the relevant phase required for the enhancement of ε′/ε.

• Due to the sizeable experimental uncertainties, Br(B_d → µμ) does not constrain the VLQ parameters further. It is thus omitted from the fit, and we compare its prediction in our models to the present measurement.

• A full analysis of B → K*ℓ̄ℓ is beyond the scope of this work. We therefore do not include the LHCb anomalies [68-71] in our fits. The analysis of b → sℓ̄ℓ in G_SM(S) models has already been presented in [9,15] and we have nothing to add here. In G_SM models the shift in C_9 is too small to be relevant, while in G_SM(Φ) models the effects are only moderately interesting and we will not address them here.

The three sectors s → d, b → d and b → s are not independent, due to relation (81). In our analysis we first show the results separately for the three quark transitions and then demonstrate in a global fit that the K-physics constraints have an impact on B physics, but not vice versa.

G_SM models

In G_SM models the absence of additional scalars allows one to vary the mass of the VLQs down to about 1 TeV without violating the hierarchy (14). The fits of the Λ_ij for the three types of transitions j → i = {s → d, b → d, b → s} in G_SM models are shown in Fig. 5 for M_VLQ = 10 TeV and in Fig. 6 for M_VLQ = 1 TeV, for the single-VLQ scenarios D and Q_V with LH and RH couplings, respectively. The plots for the LH scenarios T_u,d are qualitatively similar to D, whereas the RH scenario Q_d is similar to Q_V. Quantitative differences arise due to changes of sign in couplings and a factor 1/2 for T_d w.r.t. D and T_u, as shown in Table 3. The statistical approach for these fits is detailed in Appendix D. We make the following observations:

• All included observables are compatible with the SM prediction at 95% CL. Correspondingly, also the global fit allows for the SM solution at 95% CL in all planes in both scenarios, except for Λ^{Q_V}_bd with M^{Q_V}_VLQ = 1, 10 TeV, where the SM is slightly outside that region. This is due to the slight tensions of ∆M_d and Br(B⁺ → π⁺µμ) with their SM predictions, which fortify each other in this case.

• The individual constraints and their interplay determine the global fit regions. For M_VLQ = 1 TeV the global fit is almost completely determined by |∆F| = 1 processes in LH scenarios, but also in RH scenarios for b → d, s. On the other hand, ε_K is a very powerful constraint in RH scenarios also at 1 TeV, due to the RG effects discussed above. This is in accordance with our previous discussion of the mass dependence of these transitions. Specifically for K⁺ → π⁺νν, large effects are excluded by ε_K in combination with ε′/ε and K_L → µμ.
Without the RG contributions, enhancements up to the present experimental limit would have been possible.

• In b → s, the |∆F| = 1 observables distinguish between scenarios with LH and RH currents due to their different dependences on the corresponding Wilson coefficients, most importantly C_10 and C′_10. The consequence is shown in Fig. 5 and Fig. 6, where the allowed regions almost overlap for LH scenarios, but intersect only around the SM for RH scenarios, thereby diminishing the size of potential VLQ effects in other b → s observables. The same observation holds for b → d transitions, which will help once B_d → µμ is measured more precisely. In Fig. 11 we illustrate how Br(B_d,s → µμ) can be used in a large region of parameter space to discriminate between LH and RH models.

• In s → d transitions, the constraints from ε_K, (ε′/ε)_NP and Br(K_L → µμ)_SD restrict the allowed values of ϕ_sd. This, in combination with the slight tensions especially in b → d, leads to stronger constraints in the global fit compared to the fits for the individual transitions in b → d, s. As a consequence, correlations between different transitions arise, but at the moment they are not very strong yet. This would change with significant measurements away from the SM for at least two of the transitions.

• The |∆F| = 2 CP-asymmetric observables ε_K and sin(2β_d,s) impose constraints in the complex Λ_ij-planes which are not limited along the direction corresponding to the SM phase. Such a limit is provided by ∆M_d,s, whereas in the case of s → d the one from ∆M_K is very weak and outside of the ranges shown.

• There is a complementarity between the constraints from Br(K⁺ → π⁺νν) and Br(K_L → µμ)_SD for every VLQ representation. Thus an improved measurement of Br(K⁺ → π⁺νν) by NA62, which will operate until the LHC shutdown in 2018 and aims at a 10% uncertainty [82,83], will provide stronger cuts into the allowed parameter space. On the other hand, while the constraints from (ε′/ε)_NP and Br(K_L → µμ)_SD are theoretically limited at present, they could become very powerful in the future if theory improves.

Using the above constraints, we obtain allowed ranges for observables that are yet to be measured (precisely), listed in Table 9. We furthermore analyse patterns for each transition that will help to distinguish VLQ models from other NP scenarios, and different VLQs from each other. In this respect we point out that the models Q_d and Q_V share the structure of the flavour-changing couplings (8). Still, in Q_V models strong correlations between the up- and down-type sectors are not expected, due to the in principle independent up- and down-type Yukawa couplings.

In the Kaon sector, we make the following observations, see also Fig. 7:

• The VLQ models allow one to enhance ε′/ε significantly, thereby addressing the apparent gap between the SM prediction and the data, at the expense of suppressing Br(K_L → π⁰νν). This suppression is significantly weaker for the Q_V and Q_d models (RH currents) than for D, T_d and T_u (LH currents), in accordance with the general study in [40]. Simultaneous agreement with the data for ε_K and ε′/ε can be obtained without fine-tuning of parameters.

• While the impact of ε′/ε on K_L → π⁰νν is large, as stated above, K⁺ → π⁺νν and ε′/ε are only weakly correlated. However, in RH models ε_K prevents large enhancements of Br(K⁺ → π⁺νν); the maximal enhancement is about 50% of its SM value.
In models with LH currents a strong suppression is possible, and the SM value corresponds to an upper bound in this case when the stricter bound from K_L → µμ is used. This implies that a measurement of a significantly enhanced Br(K⁺ → π⁺νν), as presently still allowed by the data, could exclude all G_SM models with a single VLQ representation, although in models with LH currents the more conservative bound from K_L → µμ would presently still allow an enhancement of Br(K⁺ → π⁺νν) by up to a factor of two.

• In this context it should be emphasized again that the modes K⁺ → π⁺νν and K_L → µμ are strongly correlated in VLQ models, however again differently so for LH and RH currents. While for RH currents one can easily infer the allowed range in one mode from a determination of the other, within the limited range allowed by ε′/ε and ε_K, LH-current models are more strongly constrained by K_L → µμ. Progress for the latter mode depends solely on the capability to separate the long-distance contributions to this mode from the short-distance ones, since the relevant data are already very precise, see Appendix C. Note that there is basically no correlation between ε′/ε and K_L → µμ, as they are governed by the imaginary and real parts of the corresponding couplings, respectively.

• The VLQ mass does not have a large impact on all these correlations, as can be seen by comparing the lighter and darker areas in Fig. 7. The reason is that in LH models the |∆F| = 1 transitions are the dominant constraints at both masses, rendering the allowed ranges for other |∆F| = 1 processes mass-independent. For RH models, the same conclusion is reached by considering in addition the fact that ε_K is dominated by RG-induced contributions, which scale similarly to the |∆F| = 1 ones.

Correlation plots for observables in b → s processes are shown in Fig. 8. We observe the following patterns:

• Since the NP effects in all three quark transitions are governed by different parameters, the slight tensions in |∆F| = 2 observables hinted at by new lattice data [61] can easily be removed in VLQ models. This is in contrast to constrained-MFV models, where ε_K prohibits large effects in ∆M_d,s [62].

• Br(B_s → µμ) can be strongly suppressed below its SM value, as slightly favoured by experiment, while still allowing for sizeable NP effects in sin(2β_s), in particular in models with LH currents. For M_VLQ = 1 TeV the |∆F| = 1 observables constrain the NP effects in φ_s to be smaller than for larger VLQ masses.

• Sizeable deviations from the SM prediction are still possible for the mass-eigenstate rate asymmetry A_∆Γ(B_s → µμ) and the mixing-induced CP asymmetry S(B_s → µμ). Indeed, both can essentially vary in the full range [−1, 1] in LH models for M_VLQ = 1 TeV. For RH models, A_∆Γ(B_s → µμ) ≥ 50% for M_VLQ = 1 TeV, but still |S(B_s → µμ)| can reach up to 80%. For M_VLQ = 10 TeV the former is restricted to positive values in both LH and RH models, while the latter is slightly more strongly constrained in RH models, but not in LH ones. Of course, the experimental measurement of S(B_s → µμ) is very challenging. We note that to very good accuracy A²_∆Γ + S² = 1, since the direct CP asymmetry C(B_s → µμ) is negligible.

• CP-violating quantities are almost 100% correlated in b → s transitions as long as only one representation is considered. The reason is that the SM predictions are tiny, and all NP contributions are therefore directly proportional to the imaginary part of Λ_bs, which hence cancels in the ratio of two CP-violating quantities.
For small NP contributions the asymmetries are simply proportional to each other; for larger effects the relation depends on the normalisation of the asymmetry. These statements hold not only in VLQ models, but in all models that provide only a single new phase in b → s transitions; only the proportionality constant changes in other models.

• The imaginary parts of the b → sµμ Wilson coefficients C_{9,9′,10,10′} can give rise to naive T-odd CP asymmetries A_{7,8,9} in B → K*µμ that are tiny in the SM.¹⁵ The rough dependences on the Wilson coefficients are [89] A_7 ∝ Im[(C_10 − C′_10) C*_7] and A_{8,9} ∝ Im[C_9 C′*_9 + C_10 C′*_10 + …], where the dots indicate other, numerically suppressed interference terms of C_{9,9′} with C_7 that are included in the numerical evaluation. A_7 remains tiny at high dilepton invariant mass q² [90]. These CP asymmetries have been measured in various q²-bins by LHCb [86], and we choose q² ∈ [1, 6] and [15, 19] GeV², which have the smallest experimental and theoretical uncertainties. As can be seen in Table 9, the largest VLQ effects in A_{8,9} arise in the RH G_SM scenarios Q_d and Q_V, almost independently of the VLQ mass and with the strong anti-correlation shown in Fig. 8. The potential size of the VLQ effects slightly exceeds the current experimental uncertainties, specifically for the CP asymmetry A_7 in LH scenarios, such that improved measurements will provide additional bounds on the VLQ couplings in the future, especially on their imaginary parts. A_7 is correlated with A_8 and anti-correlated with A_9 in RH scenarios, whereas in LH scenarios A_{8,9} remain SM-like.

• The decays B → K^(*)νν are also sensitive probes of LH and RH NP effects due to Z exchange, and in order to exhibit these effects we consider the ratios R^νν_{K^(*)} and the quantity η of Ref. [91], which are unity and zero in the SM, respectively, and which determine the observables of interest; κ_η is form-factor dependent and given in Ref. [92] (the definitions are recalled in the sketch below). The Belle II experiment is expected to measure these branching ratios with 30% uncertainty [93] if they are of the size predicted in the SM. In RH scenarios large VLQ effects are excluded due to the strong complementarity of the |∆F| = 1 constraints from Br(B_s → µμ) and Br(B⁺ → K⁺µμ) mentioned above; R_{F_L} has to be larger than one in these cases. The VLQ effects for M_VLQ = 1 TeV can lead to a rather large suppression of R^νν_{K^(*)} in LH scenarios, for which η = 0, leading to maximally correlated R^νν_{B→K^(*)}. The suppression is smaller for M_VLQ = 10 TeV, whereas R_{F_L} = 1. The correlation plot is shown in Fig. 9. It will be challenging to distinguish the small deviations from the SM predictions in RH scenarios; however, large (suppression) effects are possible in LH scenarios, and LH and RH scenarios are well distinguishable. A measurement of R^νν_{K^(*)} significantly larger than one would challenge all G_SM scenarios with a single VLQ representation.

Similar correlation plots exist for b → d processes; however, given the CKM suppression of these modes compared to b → s, precision measurements in b → dℓ̄ℓ and significant measurements of b → dνν processes are not expected in the next couple of years. Nevertheless, we illustrate in Fig. 10 the impact of more precise measurements in this sector, exemplarily for Br(B_d → µμ). All |∆F| = 1 processes depend only on the combination ∆_ij of NP parameters, see (22); the allowed range predicted from one |∆F| = 1 process for another is therefore mass-independent, in contrast to the prediction from |∆F| = 2 processes.
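For reference, a sketch of the b → sνν quantities referred to above, written in the notation of Ref. [91] for the lepton-flavour-universal case (the average over neutrino flavours relevant in the U(1)_{Lµ-Lτ} models is an additional step not shown here):

\[
\epsilon = \frac{\sqrt{|C^\nu_L|^2 + |C^\nu_R|^2}}{|C^{\nu,\rm SM}_L|}\,,
\qquad
\eta = \frac{-\,{\rm Re}\left(C^\nu_L C^{\nu*}_R\right)}{|C^\nu_L|^2 + |C^\nu_R|^2} \in \left[-\tfrac12,\tfrac12\right]\,,
\]
\[
R^{\nu\nu}_K = (1-2\eta)\,\epsilon^2\,,\qquad
R^{\nu\nu}_{K^*} = (1+\kappa_\eta\,\eta)\,\epsilon^2\,,\qquad
R_{F_L} = \frac{1+2\eta}{1+\kappa_\eta\,\eta}\,,
\]

so that ε = 1, η = 0 reproduces the SM values quoted in the text; κ_η ≈ 1.3 is the form-factor-dependent quantity of Ref. [92]. In purely LH scenarios C^ν_R = 0 implies η = 0 and R_{F_L} = 1, as stated above.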
The present measurement of Br(B_d → µμ) by the CMS and LHCb collaborations is about 2σ larger than the SM prediction. As seen in Fig. 10, a confirmation of the present central value with higher precision would exclude the LH G_SM scenarios and yield at least an upper limit on M_VLQ for the RH ones, in accordance with the discussion in Section 5.3.

Figure 10: Br(B_d → µμ) in dependence on the VLQ mass. In dark red the constraint from |∆F| = 2 processes is shown, i.e. ∆M_d and sin 2β, in purple the constraint from B⁺ → π⁺µμ, and in orange their combination. The yellow band corresponds to the SM prediction, the grey one to the measurement by the CMS and LHCb collaborations [77]. All constraints correspond to 95% CL, only the inner darker bands to 68% CL.

G_SM(Φ) models

In G_SM(Φ) models |∆F| = 1 transitions are suppressed at large tan β compared to G_SM models, such that the constraints on the VLQ couplings are dominated by the |∆F| = 2 transitions via the box contributions. In our numerical analysis of G_SM(Φ) models we fix the parameters g′ = 1.5, X = 1, M_VLQ = 10 TeV, and choose two benchmark points, BP1 and BP2. The allowed regions of Λ_ij in G_SM(Φ) models correspond to the regions allowed by the |∆F| = 2 constraints in G_SM models, given in Fig. 5. We find that the |∆F| = 1 processes in Table 8 provide only tiny additional constraints in b → d, s and small ones in s → d, thus allowing in G_SM(Φ) models much larger values of Λ_ij than in G_SM models. The ranges still allowed for different observables with |∆F| = 1, 2 transitions are listed in Table 10, obtained by varying Λ_ij within the 95% CL regions, neglecting theory uncertainties. For this purpose (ε′/ε)_NP has been restricted as given in Eq. (83), and we used here Br(K_L → µμ)_SD < 2.5 × 10⁻⁹. Notable features for the benchmark points are:

• ε′/ε can also be enhanced in G_SM(Φ) models and thereby decrease the tension with the measurement. Especially in RH scenarios it is the constraint (83) that prevents even larger effects. The enhancement of ε′/ε falls off quickly for values of tan β and v_S larger than in the benchmark points.

• Whereas the VLQ effects in Br(K_L → π⁰νν) are small, Br(K⁺ → π⁺νν) can still be enhanced over the SM prediction by a factor of two for LH and five for RH scenarios, while even larger effects are excluded by the upper bound on Br(K_L → µμ)_SD. Most notably, (∆M_K)_SD can also be enhanced by a factor of more than two, in contradistinction to G_SM models, where the VLQ effects are tiny. The reason for this enhancement is the absence of strong constraints from |∆F| = 1 on the real part of Λ_sd. Thus a large (∆M_K)_SD is independent of ε′/ε, since the latter is sensitive to the imaginary part of Λ_sd. This effect is enhanced with decreasing VLQ effects in |∆S| = 1 transitions, as can be seen by comparing the results for BP1 and BP2. For these benchmark points large effects in Br(K⁺ → π⁺νν) in RH models remain, independently of whether the conservative or the stricter bound on Br(K_L → µμ)_SD is used, although the stricter bound would force Br(K⁺ → π⁺νν) to be above the SM prediction. But the large effects in Br(K⁺ → π⁺νν) in RH models would be constrained by an improved theoretical prediction of the long-distance effects in ∆M_K from the lattice. In LH models the upper and especially the lower bounds on Br(K⁺ → π⁺νν) would also be constrained by the latter improvements, as well as by improved bounds on Br(K_L → µμ)_SD.

• The VLQ effects are small for Br(B_s,d → µμ) and A_∆Γ(B_s → µμ), as can be seen from Fig.
11 and Table 10, respectively, but can still be sizeable for S(B_s → µμ). The CP asymmetries A_{7,8,9}(B → K*µμ) can still be significantly enhanced over the SM to the percent level, but are a factor 2-3 smaller for BP1 than in G_SM models, see Table 9.

• The VLQ effects in B → K^(*)νν in G_SM(Φ) models are smaller than in G_SM models, at the level of only a (10-20)% deviation from the SM predictions.

Summary and Conclusions

In this paper we have analysed the flavour-violation patterns in the K and B_s,d sectors in eleven models with vector-like quarks (VLQs). Five of them, called G_SM models, contain only VLQs as new particles. Two of them, called G_SM(S) models, have in addition a heavy Z′ and a scalar S. The final four, called G_SM(Φ) models, contain a heavy Z′, a scalar S and a scalar doublet Φ. Our summary of the patterns of flavour violation in these models in Section 5, accompanied by the two DNA Tables 6 and 7, and in particular our extensive numerical analysis in Section 6, see specifically Tables 9 and 10, has shown that NP effects in several of these models can still be very large and that the simultaneous consideration of several flavour observables should allow one to distinguish between these models. This is also seen in Table 11, which shows that models with LH currents can be distinguished from models with RH currents through several observables.

On the theoretical side, our paper presents the first analysis of VLQ models in the context of SMEFT, which allowed us to include RG effects from the NP scale M_VLQ down to the electroweak scale, thereby identifying a very important Yukawa enhancement of the NP contributions to |∆F| = 2 observables in the Kaon sector through the generation of left-right operators, with smaller, but significant, effects in B_s,d observables. These RG effects, relevant only in G_SM models, have already been identified in general Z models in [18], but in the present paper they could be studied explicitly in concrete models. The relevant technology is described in detail in [18] and in Section 3, Section 4 and Appendix B of the present paper. As our results have been systematically summarized in the previous section, we list here only the main highlights. The most interesting NP effects are found in G_SM models, even if they do not provide an explanation of the present LHCb anomalies. In particular:

• Tree-level Z contributions to ε′/ε can be large, so that the apparent upward shift in ε′/ε can easily be obtained, bringing the theory into agreement with the data.

• Simultaneously, the branching ratio for K⁺ → π⁺νν can be enhanced over its SM prediction, but the size of the enhancement depends on whether RH or LH currents are considered. In models with flavour-violating RH currents, the maximal enhancement is limited to ∼ 50% of its SM value because of the strong constraint from ε_K, caused by the RG-enhanced contributions. In the LH-current case an enhancement of K⁺ → π⁺νν is only possible if the present conservative bound on K_L → µμ is used. With the stricter bound only a suppression of K⁺ → π⁺νν is possible. On the other hand, the positive shift in ε′/ε uniquely implies a suppression of the K_L → π⁰νν branching ratio.

• Potential tensions between ∆M_s,d and ε_K can easily be removed in these models, since no MFV relation is imposed on the couplings.

• Significant suppressions of Br(B_s → µμ) and of A_∆Γ(B_s → µμ), in particular in models with LH currents, are possible.
As far as Br(B_d → µμ) is concerned, significant enhancements, in particular in the RH-current scenarios, are still possible, as seen in Fig. 10 and Fig. 11. While such effects are also possible in 331 models, they cannot be as large as in VLQ models.

• CP-violating effects for a given quark transition are strongly correlated in all of these models, as long as only one representation is present, specifically for b → s, where CP violation in the SM is tiny.

Having the LHCb anomalies in mind, we have also considered VLQ models with a heavy Z′ related to the U(1)_{Lµ-Lτ} symmetry. Our findings are as follows:

• The G_SM(S) models, considered already in Ref. [9], can explain the LHCb anomalies by providing a sufficient suppression of the coefficient C_9, but NP effects in B_s,d → µμ and K_L → µμ are absent, those in b → sνν transitions are small, and the ones in K⁺ → π⁺νν and K_L → π⁰νν are much smaller than in G_SM models. Most importantly, these models fail badly in explaining the ε′/ε anomaly.

• In the G_SM(Φ) models the explanation of the LHCb anomalies is more difficult than in G_SM(S) models, but this time, due to the presence of Z contributions, interesting effects in other observables can be found.

• In particular, in contrast to G_SM models, the parametric suppression of the Z couplings at large tan β allows for increased values of the Yukawa couplings, which are this time mainly bounded by |∆F| = 2 transitions.

• We find that the NP effects in ε′/ε and K⁺ → π⁺νν can be large, the latter in contrast to G_SM models, and also the corresponding effects in ∆M_K can be significantly larger than in G_SM models. This could appear to be in contradiction with the pattern in Table 7, and is the result of the weaker constraints in these models. In particular, if the ∆M_K constraint is improved in the future, such large enhancements of Br(K⁺ → π⁺νν) are likely to be excluded. On the other hand, the NP effects in K_L → π⁰νν, K_L → µμ, B → K(K*)νν and B_d,s → µμ are very small and beyond the reach of even the presently planned future facilities. While the effects in the CP asymmetries A_{7,8,9}(B → K*µμ) are smaller than in G_SM models, they might still be within reach of LHCb.

Thus, if NP is found in B_s,d → µμ and the ε′/ε anomaly is confirmed by future lattice data, G_SM models would offer the best explanation among the VLQ models. If, on the other hand, the LHCb anomalies are confirmed in the future and no visible NP is found in rare K decays, G_SM(S) and G_SM(Φ) models would be favoured over G_SM models. A large enhancement of Br(K⁺ → π⁺νν) would uniquely select RH G_SM(Φ) models, subject to the future status of ∆M_K, although LH G_SM and G_SM(Φ) models could provide a moderate enhancement, in the case of the latter depending on the theoretical treatment of K_L → µμ. On the other hand, a large enhancement of Br(B → K^(*)νν) would disfavour all considered models, at least with only one VLQ representation. Also, the confirmation of all anomalies in combination with sizeable effects in e.g. Br(B_d,s → µμ) would force us to extend the models analysed here by considering several VLQ representations simultaneously. We have also pointed out that in G_SM(Φ) models significant NP effects in ∆M_K can be found, larger than in G_SM and G_SM(S) models. While the discovery of VLQs at the LHC would give a strong impetus to the models considered by us, their non-observation at the LHC would not preclude their importance for flavour physics.
In fact, as we have shown, large NP effects in flavour observables can be present for M_VLQ = 10 TeV, and in the flavour-precision era one is sensitive to even higher scales. In this context we have pointed out that the combination of |∆F| = 2 and |∆F| = 1 observables in a given meson system generally allows one to determine the masses of VLQs in a given representation independently of the size of the Yukawa couplings.

A.2 G_SM(Φ) models

The scalar sector in G_SM(Φ) models consists of one complex scalar S(1, 0, X/2) and the two doublets Φ₁ ≡ Φ(2, +1/2, X) and Φ₂ ≡ H(2, +1/2, 0), with the corresponding scalar potential. We neglect kinetic mixing and parametrise the mass mixing of the neutral gauge bosons. After partial diagonalization of the neutral gauge-boson system, the Z and Z′ masses and their mass mixing are given by [96], with e = √(4πα) = g₂ ŝ_W = g₁ ĉ_W = g_Z ŝ_W ĉ_W. The Z-Z′ mixing angle is small unless X becomes large. The diagonalisation of the neutral gauge-boson mass matrix gives mass eigenvalues which differ from the ones in Eq. (101) by terms O(v²/v_S²). Note that we present only the solution with M_Z < M_Z′, i.e. throughout we implicitly impose that the lighter mass eigenstate couples predominantly SM-like to quarks and leptons. As a consequence a lower bound on g′ will be obtained. On the other hand, the decoupling limit g′ → 0 is not excluded, but it leads to M_Z′ < M_Z, i.e. the heavier mass eigenstate then couples predominantly to the SM-like fermions. The tan β dependence of M_Z′ becomes irrelevant once v_S ≳ 0.5 TeV. The mixing angle ξ_ZZ′ can be suppressed with large tan β and M_Z′, since we work in the part of the parameter space where the other possibility, g′ → 0, is not an option.

In G_SM(Φ) models we make use of the fact that the photon and W± interactions with leptons are SM-like in order to determine the values of the fundamental gauge couplings g₁,₂ and the VEV v from α_e(M_Z), G_F and the W-boson pole mass M_W. As the remaining free parameters we choose tan β, g′, X and v_S, whereas the dependent parameters are M_{Z,Z′} and ξ_ZZ′. Note that the latter depend only on the product g′X, such that there are effectively only three parameters. We restrict this parameter space such that the lower bound on tan β guarantees perturbativity of the top-quark Yukawa coupling [64], whereas v_S is bounded from above by the requirements (14); this yields M_Z′ ≳ 1.5 TeV within the above limits. Constraints on these parameters arise from the measured value of M_Z, which we impose with an error of δM_Z = 5 GeV to account for the use of tree-level relations only. Further constraints come from the partial widths of Z → ℓℓ̄ (ℓ = e, µ, τ), constraining the new-physics contributions to the Z-lepton couplings (24), which depend on ξ_ZZ′ and g′ due to the gauge mixing. We find a small mixing angle ξ_ZZ′ ≲ 0.1 in the above-specified parameter space of tan β, g′X and v_S if we impose the LEP bound [24] on new-physics contributions to the partial widths of Z → ℓℓ̄, allowing for 5σ deviations from the measured central values, together with the bound on M_Z. This justifies the expansion in the small mixing angle as done in Table 4.

B VLQ decoupling and RG effects

This appendix contains the results for the Wilson coefficients of the ψ²ϕ²D and ψ²ϕ³ operators in the G_SM^(′) EFTs after the tree-level decoupling of the VLQs at the scale µ_M. We further provide the relations to the flavour-changing Z and Z′ couplings (20) and (21) after spontaneous symmetry breaking at the scale µ_EW (neglecting self-mixing).
B.1 ψ²ϕ²D operators

The matching in G_SM models at the scale µ_M, of the order of the VLQ mass, yields nonvanishing contributions in agreement with [3], and analogously for G_SM(Φ) models with H → Φ. The matching of G_SM(S) models for the VLQs D and Q_V also yields nonvanishing Wilson coefficients. The flavour-changing Z and Z′ couplings (20) and (21) after spontaneous symmetry breaking are given in terms of the Wilson coefficients at the scale µ_EW. In the case of G_SM models, the tree-level calculation of the process f̄_i f_j Z_µ from the G_SM EFT (15) yields the couplings with F_H ≡ −2M_Z²/g_Z and generation indices i, j = 1, 2, 3. The variant of G_SM(S) models with the scalar sector of S and H generates only non-zero couplings to the Z′. We find the corresponding results for G_SM(S) models with the EFT coefficients C_i given in (105) and F_S ≡ m_Z′²/(g′X). The variant of G_SM(Φ) models with the scalar sector of S, H and Φ generates non-zero couplings to both Z′ and Z. The results for G_SM(Φ) models are similar to those for G_SM models, with the difference that they involve Z-Z′ mixing, where V = Z, Z′.

B.2 ψ²ϕ³ operators

We define the SM Yukawa couplings of quarks as in [26]. Nonvanishing Wilson coefficients are generated also for the ψ²ϕ³ operators (see Table 2 for definitions), as a consequence of the application of the equations of motion (EOM) in the tree-level decoupling of the VLQs in Section 3.1. Due to the application of the EOMs, these Wilson coefficients scale with the corresponding Yukawa coupling; note the matrix multiplications w.r.t. the generation indices of Y_{u,d} with the respective coefficients C_{HψD} inside the brackets. The tree-level matching in G_SM models gives nonvanishing contributions at µ_M, in agreement with [3]. Analogous Wilson coefficients in G_SM(Φ) are found by H → Φ. In G_SM(S) models analogous relations hold with nonvanishing coefficients.

B.3 Top-Yukawa RG effects

This appendix collects the ADM entries of the G_SM EFT proportional to the up-type quark Yukawa coupling Y_u from [28], i.e. neglecting contributions from Y_{d,e}. We list them only for operators that receive leading-logarithmic contributions at the scale µ_EW from the initial Wilson coefficients at the scale µ_M of the ψ²H²D and ψ²H³ operators in the 1st LLA via direct mixing, see footnote 3 (a schematic form of this evolution is given below). For the convenience of the reader we also keep C_Hu and C_Hud, which are absent in the VLQ models D, T_u, T_d, Q_d, but contribute in Q_V for λ_{Vu} ≠ 0. The H⁶ operator O_H = (H†H)³ receives direct leading-logarithmic contributions¹⁶ via C_uH ≠ 0 in the models VLQ = T_u, T_d. The Wilson coefficient C_H changes the Higgs potential and leads to a shift of the VEV [29]. The Wilson coefficients of the H⁴D² operators contribute to the Higgs-boson mass and to the electroweak precision observable T = −2πv²(g₁⁻² + g₂⁻²) C_HD [29]. The ψ²H³ operators (see Table 2) have self-mixing for C_{uH,dH}, and C_uH mixes also into C_{dH,eH}. They also receive contributions from C_{ψ²H²D}. The C_{ψ²H³} enter the fermion mass matrices (17) and lead also to fermion-Higgs couplings that are in general flavour-off-diagonal. The ψ²H²D operators (see Table 2) show a mixing pattern among the C^(1,3)_Hq as well as between C^(1)_Hq and C_Hu. The latter implies that the LH scenarios D, T_u, T_d will generate via mixing also a RH coupling C_Hu via C^(1)_Hq, which is however a one-loop effect compared to the effects of C^(1)_Hq. Both C_Hd and C_Hud have only self-mixing.
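A minimal sketch of how these ADM entries enter, assuming the common convention Ċ_a ≡ 16π² µ dC_a/dµ for the dotted coefficients (a notational assumption, not a result derived in this appendix):

\[
C_a(\mu_{\rm EW}) \;\simeq\; C_a(\mu_M) \;+\; \frac{\dot C_a(\mu_M)}{16\pi^2}\,\ln\frac{\mu_{\rm EW}}{\mu_M}\,,
\]

i.e. a coefficient generated at µ_M feeds the operators it mixes into with a single logarithm, which is the first leading-logarithmic approximation (1st LLA) referred to above.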
In the case of the ψ⁴ operators there are (LL)(LL), (LL)(RR) and (RR)(RR) operators with corresponding ADM entries, of which the ones relevant for |∆F| = 2 are given in (27) and (28). Hence there are two additional operators under the assumption C_Hu = 0.

C Master formulae for K and B decays

C.1 |∆F| = 2

The effective Lagrangian for neutral-meson mixing in the down-type quark sector (d_j d̄_i → d̄_j d_i with i ≠ j) can be written as in [34], with a normalisation factor and CKM combinations in which ij = sd for Kaon mixing and ij = bd, bs for B_d and B_s mixing, respectively. The set of operators consists of (5 + 3) = 8 operators [34], built out of colour-singlet currents, where α, β denote colour indices. The chirality-flipped sectors VRR and SRR are obtained by interchanging P_L ↔ P_R in VLL and SLL. Note that the minus sign in Q_SLL,2 arises from the different definition σ̂_µν ≡ [γ_µ, γ_ν]/2 in Ref. [34] w.r.t. σ_µν = i σ̂_µν used here. The ADMs of the 5 distinct sectors (VLL, SLL, LR, VRR, SRR) have been calculated in Refs. [33,34] at NLO in QCD, and numerical solutions are given in Ref. [97]. The NLO ADMs are also available for an alternative basis [98] with colour-octet operators and an analogous Q_SRR,2. In the SM only the VLL coefficient is non-zero at the scale µ_EW, depending on the ratio x_t ≡ m_t²/M_W² of the top-quark and W-boson masses. The |∆F| = 2 observables of interest, ∆M_K, ∆M_{B_d}, ∆M_{B_s}, ε_K and sin(2β_{d,s}), all derive from the complex-valued off-diagonal elements M^ij_12 of the mass-mixing matrices of the neutral mesons [99,100]. For the latter we use the full higher-order SM expressions in combination with the LO new-physics contributions. In particular, for M^ds_12 we make use of the NLO and in part NNLO QCD corrections η_{cc,tt,ct} collected in Table 13, and for the hadronic matrix element of the |∆S| = 2 operators the value of B̂_K. Concerning |∆B| = 2, we include the NLO QCD corrections η_B to the SM and use for the hadronic matrix elements the latest results for F_{B_d,s}√(B̂_{B_d,s}) [61]. The hadronic matrix elements of the |∆S, B| = 2 left-right operators are given in Table 14.

C.2 d_j → d_i νν

The effective Lagrangian for d_j → d_i νν (i ≠ j) is adopted from Ref. [91], where the sums extend over a = {L, R} and the neutrino flavours ν = {e, µ, τ}. In the SM only C^ν_L has a non-vanishing contribution at the scale µ_EW, whereas C^ν_R = 0. The functions B and C depend on the ratio x_t ≡ m_t²/M_W² of the top-quark and W-boson masses and enter as the gauge-independent linear combination X_0(x_t) ≡ C(x_t) − 4B(x_t) [101,102] (its LO form is recalled below). Its value, including higher-order QCD and electroweak corrections [103-106], is taken as extracted in Ref. [107] from the original papers. The theoretical predictions for the b → sνν observables defined in Eq. (87) are based on formulae given in Ref. [92]. These expressions account for the lepton-non-universal contribution of the VLQs w.r.t. the neutrino flavour in G_SM models. However, the particular structure of the gauged U(1)_{Lµ-Lτ} (4) leads to a cancellation of the numerically leading interference contributions of the SM and new physics [9]. Br(K⁺ → π⁺νν) receives in the SM the numerically leading contribution from the "top" sector; when decoupling the heavy degrees of freedom at µ_EW, this yields directly the local O^{sd,ν}_L operator (ν = e, µ, τ).
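For reference, the LO expression for the gauge-independent combination quoted above is the standard Inami-Lim result of Refs. [101,102] (a reference sketch; the higher-order corrections mentioned in the text are not shown):

\[
X_0(x_t) \;=\; \frac{x_t}{8}\left[\frac{x_t+2}{x_t-1} \;+\; \frac{3(x_t-2)}{(x_t-1)^2}\,\ln x_t\right]\,,
\]

which gives X_0 ≈ 1.48 for x_t ≃ 4.1.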
Further, a non-negligible "charm" sector arises from double insertions of hadronic and semi-leptonic |∆S| = 1 operators when decoupling the charm quark at µ_c ∼ m_c; it is enhanced due to the strong CKM hierarchy (λ_sd ∝ λ²), where λ = |V_us| is the Cabibbo angle. This is usually expressed in the effective Hamiltonian of the SM as in [108], with N = G_F α_e/(2√2 π s_W²), where X^e_c = X^µ_c = X^τ_c. The NP contributions in VLQ models cannot compete with the SM contribution to the tree-level processes entering the "charm" sector, since they are suppressed by an additional factor (M_W/M_VLQ)². In consequence, NP contributes only to the "top" sector, with X^{sd,ν}_{L,R} given in Eq. (57), such that the top sector becomes neutrino-flavour dependent. The experimental measurement averages over the three neutrino flavours; the value λ = 0.2252 has been used in Ref. [107]. The factor κ₊ contains the experimental value of Br(K → πeν_e) and the isospin correction r_{K⁺}, and has been evaluated in Ref. [110] (Table 2) including various corrections. Further, ∆_EM = −0.003 for E^max_γ ≈ 20 MeV [110]. Taking into account the different value s_W² = 0.231 used in Ref. [110] compared to our value in Table 13, one has κ₊ = 0.5150 × 10⁻¹⁰ (λ/0.225)⁸. The sum (144) contains the SM contribution and further the SM×NP and NP×NP interference terms. Besides P_c at NNLO in the SM contribution, the NLO numerical values at µ_c = 1.3 GeV are used for the SM×NP interference. The branching fraction of K_L → π⁰νν is obtained again by averaging over the three neutrino flavours, with κ_L = κ₊ (r_{K_L}/r_{K⁺})(τ_{K_L}/τ_{K⁺}) = 2.231(13) × 10⁻¹⁰ (λ/0.225)⁸. The numerical value is from Ref. [110] (Table 2) and decreases to κ_L = 2.221 × 10⁻¹⁰ (λ/0.225)⁸ when rescaled with our value of s_W².

C.3 d_j → d_i ℓ̄ℓ

The effective Lagrangian for d_j → d_i ℓ̄ℓ (i ≠ j) is adopted from Ref. [111], where the sum over a extends over the |∆F| = 1 operators, whereas the scalar operators O_{S,P(S′,P′)} and the tensor operators O_{T(T5)} are not generated in the context of VLQ models. In the SM the only non-zero Wilson coefficients are lepton-flavour universal and also universal w.r.t. down-type quark transitions, as the CKM elements have been factored out. All other Wilson coefficients vanish at the scale µ_EW. The functions B, C, D depend again on the ratio x_t ≡ m_t²/M_W² of the top-quark and W-boson masses and give the two gauge-independent combinations Y_0(x_t) ≡ C(x_t) − B(x_t) and Z_0(x_t) ≡ C(x_t) + D(x_t)/4 (a sketch of the LO expression for Y_0 is given below). In the predictions of Br(B_d,s → µμ) and of the mass-eigenstate rate asymmetry A_∆Γ(B_d,s → µμ) we include for the SM contribution the NNLO QCD [112] and NLO EW [39] corrections, whereas the NP contributions are included at LO. The values of the decay constants F_{B_d,s} are collected in Table 13. The branching fractions Br(B⁺ → (π⁺, K⁺)µμ) at high dilepton invariant mass q² are predicted within the framework outlined in Refs. [113-115]. We neglect the contributions from QCD penguin operators, which have small Wilson coefficients, as well as the NLO QCD corrections to the matrix elements of the charged-current operators [116,117], but include the contributions ∼ V_ub V*_{ud(s)}. The form factors and their uncertainties are adapted from the lattice calculations [118,119] for B → π and [120] for B → K, with a summary given in [121]. We add additional relative uncertainties of 15% for missing NLO QCD corrections and 10% for possible duality violation [114] in quadrature.
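For completeness, the LO expression for the first of these combinations is the standard Inami-Lim result (a reference sketch; the lengthier Z_0 and the higher-order corrections used in the predictions are omitted):

\[
Y_0(x_t) \;=\; \frac{x_t}{8}\left[\frac{x_t-4}{x_t-1} \;+\; \frac{3x_t}{(x_t-1)^2}\,\ln x_t\right]\,,
\]

giving Y_0 ≈ 0.94 for x_t ≃ 4.1.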
The predictions for the observables of B → K*µμ are based on Refs. [89] and [122] for the low- and high-q² regions, respectively. The corresponding results for the B → K* form factors in the two regions are from the LCSR calculation [123] and the lattice calculations [124,125]. The measurement of Br(K_L → µμ) provides important constraints on its short-distance (SD) contributions, despite the dominating long-distance (LD) contributions inducing uncertainties that are not entirely under theoretical control. In particular, there is the issue of the sign of the interference of the SD part χ_SD of the K_L → µμ decay amplitude with the LD parts. Allowing for both signs implies the conservative bound |χ_SD| ≤ 3.1 [74]. Relying on predictions of this sign based on the quite general assumptions stated in [74,126,127], one finds −3.1 ≤ χ_SD ≤ 1.7, which we employ in most of this work. Note, however, that a different sign is found¹⁷ in [126,128], implying −1.7 ≤ χ_SD ≤ 3.1. In light of this situation, we comment where appropriate on the impact of the more conservative choice, which includes both sign choices.

C.4 d_j → d_i qq̄ and ε′/ε

The effective Lagrangian for d_j → d_i qq̄ (i ≠ j) is adopted from Ref. [129], where the definitions of the operators can be found; here we restrict ourselves to s → d, i.e. ij = sd. At the scale µ_EW (N_f = 5) it contains the current-current operators O^(c)_{1,2}; the sum over a extends over the QCD- and EW-penguin operators, and we include their chirality-flipped counterparts O′_a = O_a[γ₅ → −γ₅]. Thereby we assume that VLQ contributions to other operators are strongly suppressed. The Wilson coefficients are denoted z_a, v^(NP)_a and v′_a, taken at the scale µ_EW. For the SM part, CKM unitarity was used, and we introduced a new-physics contribution v^NP_a as shown above, which is related to the VLQ contribution (62) as v^NP_a = C^sd_a, v′_a = C′^sd_a. The RG evolution at NLO in QCD and QED leads to the effective Hamiltonian at a scale µ ≲ µ_c ∼ m_c (N_f = 3) after the decoupling of the b- and c-quarks at the scales µ_{b,c} [129], where y_a ≡ v_a − z_a and all Wilson coefficients are taken at the scale µ. The contributions of new physics can then be accounted for in ε′/ε by a replacement whose minus sign is due to ⟨(ππ)_I|O′_a|K⟩ = −⟨(ππ)_I|O_a|K⟩ for the pseudo-scalar pions in the final state [130]. For the reader's convenience we provide a semi-numerical formula for ε′/ε with initial conditions for the new-physics Wilson coefficients of the QCD and EW penguins, a = 3^(′), 5^(′), 7^(′), 9^(′), at the electroweak scale µ_EW:

ε′/ε = ( −2.58 + 24.01 B_6^(1/2) − 12.70 B_8^(3/2) ) × 10⁻⁴ + new-physics terms governed by the coefficients P_a.

The coefficients are P_a = p^(0)_a + p^(6)_a B_6^(1/2), with the p^(n)_a given in Table 12, where the last column gives the P_a for B_6^(1/2)(µ) = 0.57 and B_8^(3/2)(µ) = 0.76.

D Statistical approach and numerical input

The input quantities included in our analysis are collected in Table 13 and Table 14. The CKM parameters have to be determined independently of contributions from the VLQs. The "tree-level" fit carried out by the CKMfitter collaboration achieves such a determination, taking into account only measurements that are unaffected in our NP scenarios, i.e. (semi-)leptonic tree-level decays, tree-level determinations of γ, and B → ππ, πρ, ρρ used as a constraint on γ. The results of this fit are quoted in Table 13. As the statistical procedure we choose a frequentist approach. The fits include as parameters of interest the VLQ couplings and in addition nuisance parameters, which represent the theoretical uncertainties.
The nuisance parameters are listed in Table 13 and consist of:

• CKM parameters from a "tree-level" fit¹⁸;

• hadronic parameters: decay constants, form factors, and |∆F| = 2 hadronic matrix elements.

Table 13: Values of the experimental and theoretical quantities used as input parameters, as of March 2016. *: Calculated by demanding that the uncertainty of the ratio of the decay constants given above equal the uncertainty given explicitly for the ratio, also given in Ref. [131]. **: Calculated from information given in Ref. [61]. Note their assumption for the SU(3)-breaking ratio; see [61] and [140] for the correlations. For the Kaon system, the threshold crossings to N_f = 4 and N_f = 3 have been chosen at 4.18 GeV and 1.4 GeV. The chirality factor is given as r^ij_χ = ( M_{M_ij} / ( m_i(µ_low) + m_j(µ_low) ) )². See also [18] for more details on M^ij_12.
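As a pointer to how the |∆F| = 2 observables above follow from M^ij_12, the standard relations (quoted here as a sketch, in the conventions of [99,100]; κ_ε and φ_ε are the usual correction factor and phase entering ε_K) read

\[
\Delta M_K = 2\,{\rm Re}\,M^{sd}_{12}\,,\qquad
\Delta M_{B_q} = 2\,|M^{bq}_{12}|\,,\qquad
\varepsilon_K \simeq \kappa_\varepsilon\,e^{i\varphi_\varepsilon}\,
\frac{{\rm Im}\,M^{sd}_{12}}{\sqrt{2}\,(\Delta M_K)_{\rm exp}}\,,
\]

while sin(2β_q) is obtained from the phase of M^{bq}_{12}.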
Waqf Sustainability or Sustainable Waqf? A Bibliometric Analysis

Research on waqf sustainability is increasing in popularity, showing exponential growth in publication and citation numbers. The realm of research has grown intricate and fragmented, posing a growing challenge to the regulation of waqf sustainability. The main purpose of this study is to organise and integrate the preliminary studies on the theme of waqf sustainability. To this end, this study involved bibliometric analysis, distinguishing it from previous analyses, which were outdated and/or had a different focus. We collected 84 articles extracted from the Scopus and Web of Science (WoS) databases, covering 20 years from 2001 to 2022. The findings showed that the most prolific authors were from Malaysia. There are five research themes regarding waqf sustainability: the accountability of Islamic social finance as a third-sector economy, the sustainability of Islamic microfinance, the role of intellectual capital in waqf institutions, the effectiveness of management, and the performance measurement of waqf institutions. This study shows that research on the performance of waqf institutions for waqf sustainability is scant. Hence, there is an important research gap that can be addressed in future research, since sustainability is a priority agenda as outlined in the Sustainable Development Goals (SDGs) blueprint.

INTRODUCTION

Waqf is one of the developmental instruments of Islamic economics and is considered highly relevant to improving socioeconomic development (Qurrata et al., 2021; Zauro et al., 2020). To strengthen the role of waqf in socio-economic development, many waqf managers (nazir, in Arabic) in various countries have created funding schemes that attract waqf contributions (Qurrata et al., 2021; Mohsin et al., 2019; Thaker et al., 2021; Salleh et al., 2020). In addition, the existing studies conclude that the essential characteristics of waqf properties are aligned with the term sustainability, which connotes resilience, stability, and permanence. To keep these characteristics alive, waqf managers certainly have an essential role to play in ensuring the sustainability of waqf. Hence, a sustainable waqf needs two critical components: assets (property and cash) and sound management. The failure to provide these primary components leads to the criticisms levelled at the waqf institution's current state: ineffective management of waqf and the unavailability of information related to waqf (Sulaiman & Zakari, 2015). Both of the above reflect the absence of a good governance system (Prasad, 2003), which results in waqf managers struggling to determine measurements for their performance. The performance that should be assessed in waqf institutions is based not only on economic indicators but also on qualitative measurements that include the institution's growth, effectiveness, transparency, and sustainability (Noordin et al., 2017). In the Western world, waqf institutions are classified as non-profit organisations, which have attracted much attention among scholars recently (e.g., Qurrata et al., 2021; Azrak, 2022; Amin et al., 2023). Although the donation potential of waqf institutions as non-profit organisations has been escalating, there are problems with the sustainability of waqf institutions due to major criticisms of these institutions.
The sustainability of an institution can be addressed through performance measurement, which so far has only been "top-down", meaning that measurement is determined by the highest authorities and not based on the organisational culture (Lewis, 2003) and voluntary norms (Siraj, 2012) of the waqf institution itself. Sustainability in waqf institutional management means that the results of waqf management will benefit the beneficiaries beyond the program's life (Lewis, 2003). Institutional sustainability in a program setting involves three interrelated issues (Cannon, 2002). First, financial sustainability requires analysing the project's ability to continue generating income from waqf management so that, over time, it becomes less dependent on grantors. The existence of targeted performance indicators will provide the institution with the regulatory capacity to continue to provide benefits to the beneficiary community over time. Ultimately, the benefits provided can be sustained.

Given the continuously escalating studies on waqf, the domain of institutional waqf has evolved into a wide-ranging, intricate, and disjointed area of research, making it progressively challenging to comprehend. A large body of literature has explained the socioeconomic role of institutional waqf (e.g., Awaludin et al., 2018; Laallam et al., 2022; Mohd Sharip et al., 2022). A great deal of research has conducted bibliometric analysis of the waqf literature (e.g., Alshater et al., 2022; Uluyol et al., 2021; Rusydiana, 2019; Ninglasari, 2021). As we know, the sustainability of waqf institutions must be assessed to establish the effectiveness and ability of the institutions to manage waqf assets. However, the literature on the performance measurement of waqf institutions is in short supply.

The discussion of performance measurement for waqf institutional sustainability requires bibliographical evaluation based on empirical bibliometric data. Therefore, this study pursues three primary research purposes. First, it aims to identify the key researchers, key journals, and key countries in the field of waqf and waqf institutions. Second, it is intended to organise and integrate the highly cited preliminary studies on waqf and institutional waqf. Third, it provides a research agenda regarding waqf and waqf sustainability. To accomplish these goals, this study involved bibliometric analyses derived from statistics pertaining to particular publications.

Research Design

The methodology employed in this review study was designed to explore an eclectic body of research related to performance measurement in the context of waqf institutional sustainability. Thus, this study was refined through several stages, with notable criteria for finding relevant and reputable publications for a meta-literature review. In designing the review of the literature on performance measurement for waqf institutional sustainability, this study drew on several methodological papers, especially on bibliometric analysis. Donthu et al. (2021a), Rogers et al. (2020), Chen (2017), and Mukherjee et al. (2022) are the main papers informing the selection and procedures of the bibliometric analysis. This paper uses bibliometric analysis to unveil emerging article and journal performance trends. In addition, this paper also explores the knowledge gaps of a particular theme in the existing literature (Donthu et al., 2021b; Donthu et al., 2020).
Bibliometrics and Procedures

Bibliometric analysis is a type of systematic literature review (Fan et al., 2022; Lim et al., 2022) that applies quantitative and statistical techniques to bibliographic data (Donthu et al., 2021b). As a result, bibliometric results are more objective and broader in scope than other types of review research (Donthu et al., 2021b). Donthu et al. (2021a) and Chen (2017) explain that the data in bibliometric analysis are enormous (hundreds, even thousands of records) and objective in nature. The recommended sample size for this type of bibliometric research is a minimum of 200, which will show considerable differentiation (Rogers et al., 2020). A sample size of 50 to 100 is considered less accurate for bibliometric research, but it can still provide valuable information to skilled and experienced peers.

Due to the large amount of initial data obtained, it is necessary to decompose and map the data based on keywords, clusters, and countries. Furthermore, to form and evaluate bibliometric research papers, Mukherjee et al. (2022) mention seven factors that must be fulfilled: novelty, value, importance, time, exposition, rigor, and complexity. These seven factors are important to fulfil because they support researchers in building sound bibliometric studies. After fulfilling the seven factors, the next step is to determine the main objectives and scope of the research. Once the main objective is established, determining keywords is essential for constructing the search codes. The code search is part of the data collection stage, which comprises five stages (Chen, 2017).

We used a five-step methodological process (Figure 1) adapted from Alshater et al. (2020), Chen (2017) and Misbah et al. (2022). The first step is to define the main objective and scope of the review. After forming the main objective, we searched for the topic in the Web of Science and Scopus online databases and combined the results. Naming the databases matters because literature searches require relevant and reliable sources (Denney, 2013; Chen, 2017; Misbah et al., 2022; Radu et al., 2021). This process yields a large number of articles; an extraction method is therefore needed, in the form of literature source selection, to remove articles that are not relevant to the theme of the paper. Article reduction is based on year, document type, subject area and language. This stage is important because it is the main foundation of the research (a minimal sketch of this screening workflow is given below).
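A minimal sketch of this screening workflow, assuming hypothetical CSV exports from Scopus and WoS with harmonised column names ('title', 'year', 'doctype', 'subject'); the file and column names are illustrative and not those of the actual study:

```python
# Sketch of the article-screening workflow: merge, filter, deduplicate.
# Assumes scopus_export.csv and wos_export.csv (hypothetical filenames)
# share the columns: title, year, doctype, subject.
import pandas as pd

scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")
merged = pd.concat([scopus, wos], ignore_index=True)

# Restrict by publication year, document type and subject area.
subjects = {"Social Sciences", "Economics", "Business", "Arts and Humanities"}
screened = merged[
    merged["year"].between(2000, 2022)
    & merged["doctype"].eq("Article")
    & merged["subject"].isin(subjects)
]

# Remove records indexed in both databases (the manual step done in Excel),
# matching on a normalised title.
screened = screened.assign(key=screened["title"].str.lower().str.strip())
screened = screened.drop_duplicates(subset="key").drop(columns="key")

print(f"{len(screened)} unique records retained")
```

The title-based deduplication mirrors the manual Excel step described in the text; in practice one might match on DOI where available and fall back to titles otherwise.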
ANALYTICAL TOOLS
Table 1 shows the codes used for searching in this study: ("waqf sustainability" OR "waqf institution performance" OR "non-profit organisation performance"). The codes were typed using asterisks to include all variations of the expressions. All searches were conducted on the online databases Web of Science and Scopus. The search was conducted in November 2022, with Scopus retrieving 65 articles and Web of Science retrieving 46 articles, for a total of 113 (see Table 2). Not all of the articles obtained were used; the first step was a sorting process focusing on the article title and the year of publication over the last 22 years (2000-2022). This 22-year window was chosen because the total number of articles was limited, so the range of publication years was extended. The sorted articles were also narrowed down by document type, with only journal articles selected. The selected journal articles were filtered again by subject area, keeping social science, economics, business, arts, and humanities. Furthermore, we sorted manually because some records were duplicated, being indexed in both Scopus and Web of Science (WoS). In the end, 84 articles were pulled from Scopus and WoS in BibTeX format. The analysis used two tools, VOSviewer and Microsoft Excel. Microsoft Excel was used before the data were entered into VOSviewer: redundant data were filtered with the help of Excel. We used VOSviewer to draw bibliometric networks and conduct content analyses (Van Eck & Waltman, 2010).

In bibliometric research, two categories of maps are widely used: distance-based maps and graph-based maps. VOSviewer produces distance-based maps, which show how strongly two elements are related to one another depending on their distance from one another; in general, a closer relationship is indicated by a smaller distance. In distance-based maps, items are sometimes distributed rather unevenly. This helps to quickly spot groups of related objects, but it can also make it challenging to label every item on a map without labels that overlap one another. VOSviewer can present a map in four displays, referred to as the label view, the density view, the cluster density view, and the scatter view. It is therefore well suited to furnishing wide-ranging bibliometric maps in a digestible way. These displays show the network of many items (van Eck & Waltman, 2010). The relationships depicted in this network can be built by selecting authors, sources, countries, or keywords. The results from VOSviewer were analysed scientometrically using citation, co-citation, co-word and co-authorship analysis with cluster density.

RESULTS AND DISCUSSION
A summary of the review is presented in Table 3. This table provides key information on 84 documents spanning 21 years. They come from 52 Scopus- and WoS-indexed journals. Of the 84 documents, there were 79 articles, three book chapters and three review papers. Twenty per cent of the authors of the 84 documents after filtration were authors of single-authored documents. The exploration presented includes 399 keywords. Figure 2 shows the distribution of 84 documents published between 2001 and 2022. The number of published papers rocketed by 22 per cent from 2015 to 2022, indicating that the performance of waqf institutions in achieving sustainability goals is a topic of interest. Source: Authors' estimation.
Figure 2: Article Growth by Year. Source: Authors' estimation.

The rise in the number of articles in this field mirrors the growth of the authors' scientific community. The annual increase in the number of papers on waqf shows that the crucial role of waqf as a social finance tool is starting to be recognised by many researchers (Alshater, 2021). Researchers' attention to waqf has increased in line with the growing attention to the social welfare and sustainability issues discussed in the Sustainable Development Goals (SDGs) (Aldeen, 2021). Figure 3 shows the most prolific authors in this field. Noordin and Mohd Thas Thaker are the most influential authors, with three papers published each, followed by Faizah and Abdullah, each with two manuscripts. The figure only shows four authors, as the other authors have only one manuscript each.

Citation Analysis
Citation analysis is used to analyse the relationship between publications by identifying the publications with the greatest impact on the research area. Citation reflects the intellectual relationship between publications and is determined by the number of citations received (Appio et al., 2014). The data required for citation analysis are author name, citations, title, journal, DOI, and references (Donthu et al., 2021). Although citation analysis is the most fundamental technique in science mapping, it remains a reliable way to determine how important a publication theme is in the research area. Furthermore, the most objective and direct measure of impact is citation analysis (Pieters & Baumgartner, 2002; Stremersch, Verniers, & Verhoef, 2007). This segment provides citation analysis of sources, citations, documents, and author impact.

Table 4 presents the authors' impact, showing that Darus Faizah has contributed the most to this topic, publishing 41 documents. She started publishing in 2004 and has 396 citations on Scopus. However, Mohd Thas Thaker MA has the highest Scopus h-index among the authors. He received a score of 11, meaning that the author is considered productive and that his 31 published documents have influenced the documents of other scientists and authors.

Furthermore, Table 5 reveals the top ten most cited papers on the sustainability of waqf institutions as non-profit organisations. There is no intellectual dominance by particular authors on this topic. However, considering the impact of the sources, three of the ten most cited documents were published in the Journal of Islamic Accounting and Business Research, and two of the ten were published in the International Journal of Economics Management and Accounting.

The article by Laallam et al. (2022) is the most cited on the sustainability of waqf institutions as non-profit organisations because it raises the issue of intellectual capital and organisational performance using mathematical models. Laallam et al. (2022) criticised the lack of focus on intellectual capital in non-profit religious organisations. They found that human capital, structural capital and spiritual capital impact organisational performance, whereas other forms of capital did not show such an impact in waqf institutions. Meanwhile, Sharip et al. (2022) discussed the effectiveness of management and leadership style through motivation in waqf institutions. The similarity of these two studies is that they discuss the management of waqf institutions as non-profit organisations for the institutions' sustainability.
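The h-index interpretation above (a score of 11 from 31 published documents) follows the standard Hirsch definition, which is easy to compute from per-paper citation counts. A minimal sketch, with purely illustrative numbers:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative only: this author would have h = 4.
print(h_index([25, 8, 5, 4, 3, 0]))  # -> 4
```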
Most Productive Countries
Tracking the country of origin of publications on the sustainability of waqf institutions as non-profit organisations is essential because it relates to the authors' origins and affiliations. To evaluate publication performance by country from 2001 to 2022, the metrics revealed the five most productive countries. Table 6 provides precise data showing that Malaysia is where authors on the sustainability of waqf institutions thrive. This is not surprising given the dominance of Malaysian authors on this topic, as waqf institutions there are thriving, which reinforces and increases the relevance of this research. The commitment of Malaysian scholars and their educational institutions to the development of waqf is undoubted, given that many large Islamic financial institutions have their headquarters in Malaysia, which makes waqf-related research even more desirable (Aldeen, 2021). In addition, the many documents Malaysian authors produce result in high citation rates. Figure 4 clearly shows that Malaysia conducts most research on waqf institutions.

Co-citation Analysis
Co-citation analysis is used to analyse publications frequently co-cited within the same domain (Hjørland, 2013). This analysis reveals the scholarly structure of a research field (Rosetto, Bernandes, Borini, & Gattaz, 2018). In a co-citation relationship, two publications are linked when they appear together in the bibliography of another publication. Co-citation serves to find the most influential publications so that researchers can identify thematic clusters. The determination of thematic clusters is derived from the number of cited publications (Donthu et al., 2021a). It suits researchers who want to uncover the foundations of publications and knowledge.

Figure 5 allows us to explore the keywords frequently used by authors from year to year, with colour indicating the year. In 2018, in dark purple, the most frequent keywords were sustainability, Malaysia, micro-enterprises, corporate social responsibility and management. The turquoise colour shows that in 2019 the most used keywords were waqf, zakat, Islamic finance, cash waqf, and thematic analysis. The keywords appearing in 2020 are Islamic philanthropy, case study and performance measurement. The green colour shows that the most frequently written keywords in 2021 are crowdfunding, trust, accountability, intellectual capital, and Islamic social finance. Meanwhile, in 2022 the latest, most documented keywords are motivating language, Islamic accounting, and governance. Governance is part of institutional theory, explaining how an entity controls and manages its system. Therefore, the sustainability of waqf institutions as non-profit organisations is something new within the scope of governance in waqf institutions.

The network in Figure 6 indicates these relationships. We set the citation limit at 10, dividing the 84 articles into seven clusters. Using VOSviewer, the seven clusters identified from the co-citation analysis disclose that waqf-themed papers have close relationships with various themes. In this case, waqf is related to sustainability, governance, Islamic Social Finance, and performance measurement. At the same time, the sustainability theme is related to cash waqf, intellectual capital, Malaysia and accountability.

Figure 6: Author Keywords Cluster. Source: Authors' estimation.
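Operationally, the co-citation linkage described above reduces to counting, for every pair of references, how many bibliographies contain both. A minimal sketch with made-up reference keys:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(bibliographies: list[list[str]]) -> Counter:
    """Count how often each pair of references appears in the same bibliography."""
    pairs: Counter = Counter()
    for refs in bibliographies:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical reference lists of three citing papers.
bibs = [
    ["Lewis2003", "Cannon2002", "Siraj2012"],
    ["Lewis2003", "Cannon2002"],
    ["Lewis2003", "Siraj2012"],
]
print(cocitation_counts(bibs).most_common(2))
# [(('Cannon2002', 'Lewis2003'), 2), (('Lewis2003', 'Siraj2012'), 2)]
```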
Co-Authorship Analysis
We conducted cluster exploration by interrogating co-authorship results with social network analysis techniques. This science-mapping method can help decipher social patterns, such as relationships between authors and their affiliations (e.g., countries) and relationships among authors, which reflect the characteristics of relationships between groups of authors (Mukherjee et al., 2022). Furthermore, a thorough understanding of the words and language that individuals frequently share in these clusters is important (Nerur et al., 2008), as it lets us identify the social processes behind the co-production, sharing and spread of knowledge within and between different clusters.

Our co-authorship process is divided into two analyses. First, we analysed co-authorship using countries as the unit of study. Second, we set a minimum number of documents for each country and a minimum number of citations capped at '1' for a complete analysis of the origin of the document sources. These parameters resulted in five clusters. Figure 7 provides the results of the co-authorship analysis using the country as the unit of analysis. The five clusters are 'red', consisting of Malaysia and Brunei Darussalam; 'green', representing Indonesia; 'yellow', describing Oman; 'blue', representing Morocco; and 'purple', reflecting Turkey. The research trend from year to year is illustrated in Figure 8. Furthermore, we performed the same steps as in the co-authorship-by-countries analysis to analyse the year-to-year trend of co-authorship by authors. The results of the co-authorship-by-authors analysis show the social relationships between authors (Figure 9).

Cartography Analysis Using Keywords Occurrence
We found contradictory results between co-citation and co-authorship: while the co-citation results yielded seven clusters, the co-authorship results indicated five clusters for the 84 waqf papers selected in this bibliometric analysis. Therefore, we conducted a cartography analysis using keyword occurrence to see the characteristics of research themes based on the keywords in articles. This analysis follows Alshater et al. (2021) because, using co-occurrence, we can select all keywords as the unit of analysis. Furthermore, co-occurrence analysis explains potential relationships between research units, so the results found are more closely related than in co-citation. Revealing these relationships uncovers words that appear within the same theme. This outlines scholarly clusters achieved through keyword co-occurrence analysis, which can show network relationships on the theme (Chabowski et al., 2013). Furthermore, co-occurrence analysis aims to lead to an organised framework for theme refinement and future research.
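A keyword co-occurrence network of this kind can be sketched as follows: build a weighted graph from per-document keyword lists, drop keywords below a minimum occurrence threshold, and cluster the remainder. Note that VOSviewer applies its own mapping and clustering technique; the generic modularity-based clustering below is only a stand-in for illustration.

```python
# Sketch of keyword co-occurrence clustering in the spirit of VOSviewer.
from collections import Counter
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def keyword_clusters(docs: list[list[str]], min_occurrences: int = 3):
    G = nx.Graph()
    for keywords in docs:
        for a, b in combinations(sorted(set(keywords)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1  # one more document uses both keywords
            else:
                G.add_edge(a, b, weight=1)
    # Keep only keywords occurring in at least `min_occurrences` documents.
    counts = Counter(k for keywords in docs for k in set(keywords))
    G.remove_nodes_from([k for k in list(G) if counts[k] < min_occurrences])
    return list(greedy_modularity_communities(G, weight="weight"))
```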
Figure 10 presents the cartography analysis results from VOSviewer. In the co-occurrence analysis, we selected all keywords as the unit of analysis and set the minimum number of citations at '3'. With this analysis, we found five clusters, represented by 'red', 'green', 'yellow', 'blue', and 'purple'. The most dominant networks are waqf, sustainability and Malaysia. As presented in Figure 11, we also found that in 2018 the most important occurrence was the sustainability network with Malaysia and Islamic microfinance. The co-occurrence of sustainability keywords in 2019 developed towards waqf, Islamic finance and the third sector. The sustainability keyword also led to waqf institutions, effectiveness, thematic analysis and cash waqf in 2020, and then increasingly to Islamic philanthropy and accountability in 2021. Meanwhile, the novelty of waqf keywords in 2021 is their connection with waqf institutions, intellectual capital, Islamic altruism, accountability and trust (see Figure 12). Through co-occurrence analysis, we found five research streams obtained from the keyword clustering results. The first cluster relates to the issue of accountability in Islamic Social Finance (ISF), where accountability is linked to ISF as a third-sector economy. In the second cluster, we grouped papers on the contemporary theme of sustainability in Islamic microfinance. In the third cluster, we identify the role of intellectual capital in waqf institutions. The fourth cluster focuses on management effectiveness in Malaysia. Finally, the fifth cluster covers the theme of performance measurement in waqf institutions. Based on the analysis above, we finalised five groups among the 84 papers related to the sustainability of waqf institutions as non-profit organisations. A more in-depth discussion is presented in the research streams section.

Research Agenda 1: The Accountability of Islamic Social Finance as Third Sector Economy
Accountability in ISF institutions, especially waqf, is currently in demand among other topics. Accountability is among the top ten keywords of interest in waqf, which indicates accountability problems in waqf management (Sukmana, 2020; Alshater et al., 2022). Accountability is the foundation of endowment- and religion-based institutions, as it impacts an institution's trust and sustainability (Agyemang et al., 2017; Yasmin & Ghafran, 2019). Waqf institutions are endowment- and religion-based institutions, part of the third-sector economy that plays a vital role in Islamic culture and teachings (Arshad et al., 2016). Therefore, waqf, like other third-sector organisations (TSOs), must be formally recognised to build public trust.

Abdullahi (2022) illustrates that waqf institutions have financial problems in running their activities. These circumstances are due to poor performance, historical neglect and the colonial past. Kamaruddin et al. (2022) reported the case of Malaysia, where waqf reporting practices are weak due to the absence of standardised waqf reporting, a lack of awareness among mutawalli (waqf managers) of the need to report, limited reporting channels from state authorities to national authorities, diverse governance structures, and the unwillingness of mutawalli to disclose the performance of waqf institutions. The influencing factors are leadership, institutional culture, and politics as push factors. In addition, the limited availability of qualified personnel and sustainability issues ultimately affect the visibility of waqf reports.
Consequently, there is a need for efforts to rebuild the image of waqf institutions. This is because the trust of funders and public support for the sustainability of waqf institutions depend on the impression given, and that impression relates to how institutions showcase the effectiveness of their accountability (Yang & Northcott, 2019). Without accountability, waqf institutions, as voluntary-based institutions operating on the concept of trust, will not be able to successfully manage and develop waqf assets as public goods.

Waqf institutions need sustainability to mobilise resources. For this reason, social development can be achieved through the important public role of waqf. The sustainability of waqf institutions requires accountability, which depends on the mutawalli's policy of implementing good governance. Therefore, mutawalli accountability is a fundamental issue, as it is intrinsically linked to the sustainability and survival of the waqf institution. As waqf managers, mutawalli are expected to be accountable in order to earn the trust of wakif (funders) and hence their continued donations (Hairul-Suhaimi et al., 2018).

In Islamic teachings, accountability, governance and sustainability are central to the management of Islamic Social Finance because ISF has a strategy to support the current economy through its religious and social sides (Awaludin et al., 2018). Thus, governance in ISF institutions, especially waqf, still has the potential to be improved. Quantitative measurement through the PLS-SEM technique has been conducted by Hasan et al. (2022) to show that the management ability of mutawalli has a positive impact on accountability; as a result, trust in waqf management also increases. This is in line with implementing accountability through formal reporting to increase trust in waqf institutions. Waqf institutions need to put in place operational arrangements and performance measurements to provide measurable indications and indirectly demonstrate the success of the institution's management.

Thus, the criticism raised in this research agenda is that the differing implementations of accountability in each waqf institution have not yet found a middle ground on how waqf institutions should carry out a standard of accountability. This situation raises a worthwhile discussion about the key dimensions waqf institutions should have in disclosing their accountability. These key dimensions should be based on the accountability relationship, viewed from the sharia perspective, among waqifs, to raise the standard for waqf institutions.
Research Agenda 2: The Sustainability of Islamic Microfinance
The second research agenda describes the increasing demand for work on the sustainability of Islamic microfinance. We identified several papers that address the sustainability of Islamic microfinance. Waqf institutions are related to Islamic microfinance, as most authors focus on the importance of waqf in poverty alleviation. For example, Abdullah & Ismail (2017) explain that waqf management under good governance principles is a funding source for Islamic microfinance institutions (MFIs), meaning that the sustainability of waqf management under good governance principles will ensure the sustainability of MFIs. Given the permanence, irrevocability and perpetuity of waqf, the waqif is not allowed to obtain any monetary benefit; accordingly, waqf-based MFIs can provide capital at low returns to poor entrepreneurs, who also do not need collateral to obtain a loan.

Building on the concept of Abdullah & Ismail (2017), Ascarya & Masrifah (2022) implemented a cash waqf system in Baitul Maal wat Tamwil (BMT) in Indonesia. Baitul Maal wat Tamwil is a form of MFI that aims to optimise social and commercial activities for poor entrepreneurs so that they obtain welfare benefits. However, financial problems still need to be solved for small businesses in Indonesia; hence BMTs can help poor entrepreneurs through cash waqf (Fauziah, 2021). The irrevocable nature of waqf does not allow waqf to be used arbitrarily. Ascarya & Masrifah (2022) emphasised that the most critical policies in waqf management for Islamic microfinance are the shiddiq, amanah, and professional traits that the mutawalli must possess. All of these traits need to be incorporated in the recruitment of BMT employees and members, the creation of standard operating procedures, standard operating management, and the information technology system used for administration. The formation of these essential components will ensure the sustainability of Islamic microfinance built on cash waqf.

The concept of waqf management through Islamic microfinance is not limited to small entrepreneurs; it can also provide financing for cultural heritage maintenance in Palestine (Assi, 2008). Islamic microfinance funded by waqf can create a third sector related to charity and be adapted for cultural heritage management. Well-maintained cultural heritage sites can be used as tourist sites, so sustainable waqf funding of Islamic microfinance through cultural heritage maintenance can be achieved. Beyond cultural heritage maintenance, waqf funds channelled through Islamic microfinance can also help renovate houses damaged by war in the Philippines (Bayram & Altarturi, 2020). Waqf managed through Islamic microfinance is used as a soft loan without interest so that refugees can obtain funds to renovate their homes.
The management of waqf funds for social activities, as above, requires Islamic financial planning to realise the sustainability of Islamic microfinance. Mutawalli need the help of financial managers who can calculate and mitigate risks so that the waqf funds given to the community do not run out (Billah & Saiti, 2017). A third party for waqf financial management is also needed to inform prospective waqifs about projects that waqf institutions will carry out through Islamic microfinance schemes (Hapsari et al., 2022). Financial planning can help mutawalli be responsible for the sustainable development of waqf, especially cash and property waqf. Mutawalli are responsible for the productivity of waqf so that it can generate value from waqf property or cash. This shows that waqf in the form of productive cash and property can provide income that has a sustainable impact on socioeconomic development (Hassana et al., 2020).

Furthermore, we find limitations in this research stream regarding the Islamic ecosystem that must be built to realise the sustainability of Islamic microfinance. The required Islamic ecosystem involves the participation of waqf institutions, crowdfunding, Baitul Maal wat Tamwil (BMT), zakat institutions and Islamic banks. However, we have not yet found evidence of the participating institutions' capacity to contribute to the sustainability of Islamic microfinance. What can be researched further, therefore, are the regulations suitable for waqf institutions to develop Islamic microfinance; the challenges faced in building integration between waqf institutions and Islamic microfinance also need to be explored more deeply.

Research Agenda 3: The Role of Intellectual Capital in Waqf Institutions
This research agenda concerns the role of intellectual capital in waqf institutions. Waqf management, which requires accountability, governance and sustainability, can only be achieved with human resource management. The human resource management process includes recruitment, selection, performance appraisal, training and development, and compensation. These processes are implemented to achieve the goal of good governance of waqf institutions (Hasan et al., 2019). Good governance can be achieved if the effectiveness of human resources plays a significant role in the management of waqf institutions (Sharip et al., 2016). Human resources are related to intellectual capital (IC) in implementing waqf institution operations. The assessment of IC can help waqf institutions achieve effectiveness and sustainability of performance (Laallam et al., 2020).

IC is an intangible asset expected to improve worker performance and job satisfaction, and increases in worker performance and job satisfaction will affect organisational performance (Muwardi et al., 2020). On top of that, IC can be maximised by implementing knowledge discussions between workers and experts. Until 2020, IC remained a conceptual topic in this field, so further empirical testing of the relationship between IC and the performance of endowment institutions was needed. However, Laallam et al.
(2022) empirically tested the relationship between IC and organisational performance in Algeria. The results explain that human capital, structural capital, and spiritual capital have a positive relationship with organisational performance. However, relational capital, social capital and technological capital do not yet show a positive relationship with the performance of waqf institutions in Algeria. Furthermore, research on IC still needs to be conducted in countries that are demographically similar to, and different from, Algeria.

Research Agenda 4: The Effectiveness of Waqf Management in Malaysia
For the fourth research stream, we identified an issue with management effectiveness in Malaysia. This research agenda differs from the others in that it concerns the widespread issue of waqf management. Chowdhury et al. (2011) examined the cash waqf management system in Malaysia and evaluated the factors that affect its performance. Their research aimed to improve current institutional arrangements and enhance network relationships across Malaysia so as to improve performance towards efficient, need-based, dynamic management. Thus, waqf institutions can formulate various innovations and developments of waqf management systems in line with Islamic sharia. Sharip et al. (2022) studied the motivating language (ML) of the leaders of waqf institutions in Malaysia to this end. However, Sharip et al. (2022) could not deliver findings that constitute a comprehensive solution for improving management effectiveness in waqf institutions, because the analysis focuses only on the leaders' communication and so cannot be generalised to other contexts. Thus, problems in waqf management, such as inefficiency, have yet to be solved (Sapuan & Zeni, 2021). Finally, waqf institutions need to reform waqf management in order to achieve sustainability. Using a quantitative approach, Sapuan & Zeni (2021) found that the following factors affect the long-term sustainability of waqf institutions: strengthening policies and legislation, increasing human capital capacity and capability, intensifying financial assistance programmes for entrepreneurs, strengthening infrastructure, and effective governance.

The factors mentioned above affect the management effectiveness of waqf institutions in Malaysia, but the management of waqf institutions, especially in terms of administration, requires innovative new approaches. A hybrid model is one example that can generate more benefits for all parties involved (Chowdhury et al., 2012). Sulaiman & Zakari (2019) conducted empirical research to measure management effectiveness through financial ratios. They set seven waqf institutions as the unit of analysis, but only one institution was financially sustainable. The components used to measure the financial ratios are the equity balance, revenue concentration, administrative cost and operating margin ratios. Furthermore, Yakob et al. (2022) conducted a study to measure enterprise risk management (ERM) in waqf institutions, finding that waqf institutions have a less-than-optimal implementation of ERM, so its aspects need to be improved over time. This finding supports the research of Pyeman et al. (2016), who measured the efficiency of all waqf institutions in Malaysia and found that only the state of Penang has highly efficient management.
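Sulaiman & Zakari's (2019) four ratio components are named above, but their exact formulas are not spelled out in this review. The sketch below therefore uses commonly cited non-profit financial-vulnerability definitions (Tuckman-Chang-style measures) as stand-ins; treat each formula as an assumption rather than the authors' specification.

```python
from dataclasses import dataclass

@dataclass
class WaqfFinancials:
    net_assets: float
    total_revenue: float
    revenue_by_source: list[float]  # e.g. [donations, rental income, ...]
    admin_expenses: float
    total_expenses: float

    def equity_balance(self) -> float:
        # Assumed definition: net assets relative to revenue.
        return self.net_assets / self.total_revenue

    def revenue_concentration(self) -> float:
        # Assumed definition: Herfindahl-style index; 1.0 = single source.
        shares = (r / self.total_revenue for r in self.revenue_by_source)
        return sum(s * s for s in shares)

    def administrative_cost(self) -> float:
        # Assumed definition: share of spending going to administration.
        return self.admin_expenses / self.total_expenses

    def operating_margin(self) -> float:
        # Assumed definition: surplus (or deficit) relative to revenue.
        return (self.total_revenue - self.total_expenses) / self.total_revenue
```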
Research Agenda 5: Performance Measurement of Waqf Institutions and Non-Profit Institutions
In the last research agenda, we find contemporary issues in the waqf literature. The fifth research agenda relates to performance measurement in waqf institutions. Our findings show that this is the least-studied topic in relation to waqf. Waqf institutions are categorised as non-profit institutions due to the permanence, irrevocability, and perpetuity of waqf. Som & Nam (2009) clearly show that organisational learning is required to carry out performance measurement in non-profit institutions, and organisational learning has a strong relationship with organisational performance. Performance measurement in non-profit organisations covers both financial and social measurements (Alfirević et al., 2014). Effective performance measurement is crucial in promoting good governance and ethical management in waqf institutions. Thus, Noordin et al. (2017) developed a framework for assessing the performance of waqf institutions and outlined eight important stages that can serve as guidelines for waqf institutions in designing their performance measurement.

The conceptual framework established by Noordin et al. (2017) was refined by Arshad et al. (2018), who explored performance measurement indicators for waqf institutions based on non-profit organisations. These indicators are built from a maqasid sharia perspective. This research results in a maqasid sharia-based performance measurement model for waqf institutions; consequently, waqf institutions can adopt it in assessing their performance and fulfilling accountability to waqifs. Although waqf institutions facilitate socio-economic growth for society, implementing waqf management as a prominent organisation remains a challenge. Ramli et al. (2018) explain that to overcome problems such as underdeveloped waqf property, the unproductive nature of assets, the inability to generate income, and inadequate documentation systems, it is necessary to create performance measurements that can provide evaluation for waqf institutions. This finding is in line with the research of Masruki et al. (2019) and Bernal-Torres et al. (2021), who developed measurements of the socio-economic impact of waqf institutions. The identification conducted by Masruki et al. (2019) covers input, output, outcome, impact, effectiveness, and efficiency. Meanwhile, Bernal-Torres et al. (2021) innovate on the performance measurement of non-profit organisations so that it connects directly to the social problems that constitute the institution's mission. Widiastuti et al.
(2021) contributed a financial measurement model to assess the performance of the mutawalli. This model evaluates the mutawalli's management of waqf assets based on laws and regulations, the objectives of the institution, and the requests of the wakif and mauquf alaih (beneficiaries). Performance measurement is organised into three groups: performance, impact, and results, each containing ratios with several segments. This research aims to contribute an objective and informative financial performance measurement for the decision-making process in managing waqf assets. Implementing performance measurement is not easy, considering there must be a cultural transition towards organisational performance if waqf institutions are to conduct performance assessments (Jiao et al., 2022). However, if changes in organisational culture and performance can lead to improved performance of waqf institutions, it is worth implementing.

CONCLUSION
As a result of increasing research contributions on the sustainability of waqf institutions as non-profit organisations, waqf is receiving attention from around the world. This paper presents a systematic review of the sustainability literature on waqf institutions using bibliometric analysis and delivers two outcomes. First, we identify and discuss bibliometric results on the waqf institutional sustainability literature. We used VOSviewer to generate bibliometric reviews of 84 documents in the Web of Science (WoS) and Scopus databases. An important result of this process is that the countries with the most research in this area are Malaysia and Indonesia. With co-citation analysis, we found seven clusters showing that waqf-themed papers have close relationships with various themes; waqf is related to sustainability, governance, Islamic Social Finance, and performance measurement, while the sustainability theme is related to cash waqf, intellectual capital, Malaysia and accountability. Noordin and Mohd Thas Thaker are the most influential authors. Second, this paper delivers five significant themes on the sustainability of waqf institutions as non-profit organisations: 1) The Accountability of Islamic Social Finance as Third Sector Economy, 2) The Sustainability of Islamic Microfinance, 3) The Role of Intellectual Capital in Waqf Institutions, 4) The Effectiveness of Management in Malaysia, and 5) Performance Measurement of Waqf Institutions. These themes can serve as future research recommendations for other scholars. However, this paper is limited by the small number of articles that discuss the topic; it could therefore be improved by retrieving and filtering additional articles through other search engines.

Figure 1: Research Flowchart
Figure 3: Most Relevant Authors
Figure 4: Publications by Country
Figure 5: Author Keywords Trend
Research on the theme of the sustainability of waqf institutions as non-profit organisations was conducted by Brunei Darussalam in 2017. Between 2018 and 2019, Malaysia, Morocco, and Oman conducted research on the same theme. Indonesia and Turkey conducted further research in 2020-2021.
Figure 7: Co-Authorship by Countries
Figure 8: Co-Authorship by Countries, Year-to-Year Trend
Figure 8 shows in detail that there are four clusters of researchers. The first cluster was started by Noordin NH's document in 2018, cited by Kassim S and Hassan R in the first semester of 2019 and finally cited by Engku Ali E.R.A. in the second semester of 2019. Apart from citing Noordin NH, Engku Ali E.R.A. also cited documents belonging to Kassim S and Hassan R. The second cluster is occupied by Darus F, who cited Ramli A and Yusoff H in 2019. The third cluster comprises Abdullah R's documents in 2018, and the fourth cluster comprises Masruki R's documents in the second semester of 2019.
Table 1: Search Queries and the Number of Papers
Table 2: Literature Database Classification
Table 3: Summary of the Review (authors of single-authored documents: 17; authors of multi-authored documents: 67)
Table 4: Authors' Impact
Table 5: Top Ten Cited World Documents
Table 6: Top Five Authors' Countries of Origin
8,845.2
2024-02-29T00:00:00.000
[ "Environmental Science", "Economics" ]
Effect of Different Downward Loads on Canal Centering Ability, Vertical Force, and Torque Generation during Nickel–Titanium Rotary Instrumentation
This study aimed to examine how downward loads influence the torque/force and shaping outcome of ProTaper NEXT (PTN) rotary instrumentation. PTN X1, X2, and X3 were used to prepare J-shaped resin canals employing a load-controlled automated instrumentation and torque/force measuring device. Depending on the torque values, the handpiece was programmed to move as follows: up and down; downward at a preset downward load of 1 N, 2 N or 3 N (Group 1N, 2N, and 3N, respectively; each n = 10); or upward. The torque/force values and instrumentation time were recorded, and the canal centering ratio was calculated. The results were analyzed using a two-way or one-way analysis of variance and the Tukey test (α = 0.05). At the apex level, Group 3N exhibited the least canal deviation among the three groups (p < 0.05). The downward force was Group 3N > Group 2N > Group 1N (p < 0.05). The upward force, representing the screw-in force, was Group 3N > Group 1N (p < 0.05). The total instrumentation time was Group 1N > Group 3N (p < 0.05). In conclusion, increasing the downward load during PTN rotary instrumentation improved the canal centering ability, reduced the instrumentation time, and increased the upward force.

Introduction
Root canal instrumentation that facilitates effective disinfection is a key objective of root canal therapy [1]. However, curved and constricted root canals pose the risk of creating iatrogenic aberrancies, such as ledges, apical canal deviations, and canal wall perforations, which may jeopardize the outcome of root canal therapy [2,3]. Nickel-titanium (NiTi) engine-driven instruments have come into widespread use since they are more flexible [4], maintain the canal curvature better [5], and offer a more favorable treatment outcome [6] than stainless steel hand instruments. However, the unexpected intracanal separation of rotating instruments is still a major concern in NiTi rotary instrumentation [7,8].

Proper manipulation of NiTi rotary instruments is important to preventing iatrogenic errors [8]. Several studies have focused on how the dynamics of the use of NiTi instruments influence the risk of intracanal instrument separation, and have identified factors that reduce the torque and/or force generation, including a shorter pecking depth [9], reciprocating motion (versus continuous rotation) [10], higher rotational speed [11,12], and higher pecking speed [13]. These factors may contribute to the reduction in stress accumulated in the rotating instruments, leading to a reduced risk of intracanal instrument separation.

The downward load applied to NiTi rotating instruments may be a factor that impacts stress generation [9]. Although gentle apical pressure is widely recommended [8], the magnitude of the downward load may largely depend on several operational factors related to the clinician's handling behavior; thus, there is a lack of objective standardization [14]. Limited information is available on how the downward load applied to NiTi rotary instruments affects their shaping ability, stress generation, and shaping efficiency [14]. The aim of this study was to examine how different downward loads influence the canal centering ratio, torque/force development, and instrumentation time of ProTaper Next rotary instruments (PTN: Dentsply Sirona, Ballaigues, Switzerland), employing load-controlled automated root canal instrumentation.
The null hypothesis was that the downward load did not influence the canal centering ratio, torque/force development, or instrumentation time when PTN was used for the instrumentation of curved root canals.

Sample Size Estimation
G*Power software (version 3.1.9.2, Heinrich Heine University, Düsseldorf, Germany) was employed with the effect size, α error and power set at 1.4, 0.05 and 0.80, respectively, based on the data from preliminary experiments. The sample size was estimated as 10 per group.

Downward Load-Controlled Root Canal Instrumentation and Torque/Force Measurement
A root canal instrumentation device comprising a low-speed, torque-controlled motor (J Morita, Kyoto, Japan) and a motor-driven testing stand (MX2-500N; Imada, Toyohashi, Japan) [13,15] was modified and used to control the magnitude of the downward load (Figure 1). A custom-made handpiece holder was fixed to the mobile stage of the testing stand with an electromagnet. The handpiece holder was hung with a wire and balanced by weights that were hung on the opposite side of the handpiece via three pulleys. When the electromagnet was turned On, the handpiece and stage were programmed to move together at a preset speed of 50 mm/min [13].
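The G*Power estimate above can be roughly cross-checked in Python, assuming the reported effect size of 1.4 is Cohen's f for a one-way ANOVA across the three load groups; that interpretation is an assumption, since the test family is not stated here.

```python
# Rough cross-check of the sample-size estimate, under the stated assumption.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=1.4,  # assumed to be Cohen's f
    alpha=0.05,
    power=0.80,
    k_groups=3,
)
print(f"estimated total N across the 3 groups: {n_total:.1f}")
```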
When the electromagnet was turned Off, the handpiece was released from the stage and fell freely with a downward load controlled by the weights. To calibrate the downward load to 1 N, 2 N, and 3 N, vertical force values were measured when the electromagnet was turned Off and the head of the handpiece, without a NiTi instrument, directly touched the top of the canal model attached to the torque/force measuring unit. The handpiece was programmed to make three types of movements depending on the clockwise torque values detected by the motor, as follows.

Movement 1: When the torque value was less than 0.2 N·cm, the electromagnet was programmed to be On, and the handpiece and stage made a downward movement for 2 s and an upward movement for 1 s at a speed of 50 mm/min [13]. The instrumentation always started with this movement.

Movement 2: When the torque value was between 0.2 N·cm and 2.5 N·cm, the electromagnet was programmed to be Off, and the handpiece fell freely with a preset downward load (1 N, 2 N, or 3 N).

Movement 3: When the torque value was more than 2.5 N·cm, the electromagnet was programmed to be On, and the handpiece and moving stage moved up together for 3 s at 50 mm/min. After moving up, the handpiece made one of the three types of movements depending on the torque measured by the motor (a minimal sketch of this torque-gated logic is given below).

A resin block having a J-shaped simulated canal (size #15, 0.02 taper, 45° curvature, 17 mm length; Endo Training Bloc, Dentsply Sirona) was fixed on a metal stage linked to the torque/force measuring unit with a metal rod. The measuring unit consisted of strain gauges (KFG-2-120-D31-11, Kyowa Electronic Instruments, Tokyo, Japan) and a load cell (LUX-B-ID; Kyowa Electronic Instruments), which were used for measuring the torque and force, respectively [13,15]. The output signals were amplified using an amplifier (PCD-400A, Kyowa Electronic Instruments) and transferred to a computer with data recording software (DCS-100A; Kyowa Electronic Instruments).

Root Canal Instrumentation
The resin blocks (n = 30) were instrumented with the full working length set to the apex. The ProTaper Gold SX instrument (Dentsply Sirona) was first used to flare the canal to 5 mm from the apex, and the ProGlider instrument (Dentsply Sirona) attached to the automated instrumentation device was used to establish a glide path to the apex. The resin blocks were then assigned randomly into Groups 1N, 2N, and 3N (each n = 10), in which the downward load was set at 1 N, 2 N, and 3 N, respectively. Each canal was instrumented with PTN using the downward load-controlled automated root canal instrumentation device. X1 (size 17/0.04 taper at the tip area), X2 (size 25/0.06 taper at the tip area), and X3 (size 30/0.07 taper at the tip area) instruments were used sequentially. With each instrument, the instrumentation had two steps, i.e., to 1 mm short of the apex and then to the apex. A lubricating paste (RC Prep, Premier, Plymouth Meeting, PA, USA) was used during instrumentation. Following each use of an instrument, canal irrigation with 1 mL distilled water followed by patency verification using a size 10 K-file was performed. Each set of PTN instruments was used in one canal only. During the PTN rotary instrumentation, the upward and downward force values and clockwise torque values were recorded, and the maximum values developed with each instrument were determined.
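The three torque-gated movements reduce to a simple threshold rule. The sketch below transcribes that rule; the thresholds and speeds come from the description above, while the control-loop structure itself is an illustration, not the device's actual control software.

```python
LOWER, UPPER = 0.2, 2.5  # clockwise torque thresholds in N*cm

def select_movement(torque_ncm: float) -> str:
    if torque_ncm < LOWER:
        # Movement 1: electromagnet On; pecking motion at 50 mm/min
        # (down for 2 s, up for 1 s).
        return "peck"
    if torque_ncm <= UPPER:
        # Movement 2: electromagnet Off; free fall under the preset
        # downward load (1 N, 2 N or 3 N).
        return "free_fall"
    # Movement 3: electromagnet On; withdraw upward for 3 s at 50 mm/min.
    return "withdraw"
```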
Evaluation of the Canal Centering Ratio and Instrumentation Time
Image analyzing software (Photoshop 7.0, Adobe Systems, San Jose, CA, USA) was used to determine the canal centering ratio, as described previously [13,16]. Briefly, superimposed pre- and post-operative digital images were created (Figure 2), and the amount of material removed from the outer and inner canal wall was measured at five measuring levels (0, 0.5, 1, 2 and 3 mm from the apical terminus). The canal centering ratio was determined with the formula (X − Y)/Z, where X = amount of material removed from the outer wall, Y = amount of material removed from the inner wall, and Z = post-operative diameter of the canal.

The instrumentation time was defined as the time elapsed from the time point at which the torque value first exceeded 0.2 N·cm to the end of instrumentation. The time was calculated from the raw data of torque values acquired from the torque/force measuring unit.

Statistical Analysis
The normality and the homogeneity of variance of the data were confirmed with the Shapiro-Wilk test and Levene's test, respectively. The canal centering ratio, the vertical force and clockwise torque values, and the instrumentation time were analyzed with a two-way analysis of variance followed by the Tukey test. A one-way analysis of variance and the Tukey test were used to analyze the total instrumentation time. The p value was considered significant at 5%.

Results
No instrument separation, distortion, or ledge formation was observed in any of the groups.

Canal Centering Ratio
The mean values of the canal centering ratio with the three different downward load values are shown in Figure 3. At 0 mm from the apex, Group 3N showed the lowest centering ratio among all the groups (i.e., least deviation, p < 0.05). In Groups 1N and 2N, the absolute value at 0 mm was significantly greater than the values at all other measuring points (p < 0.05).
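The centering-ratio formula above translates directly into code; a minimal sketch:

```python
LEVELS_MM = (0, 0.5, 1, 2, 3)  # measuring levels from the apical terminus

def centering_ratio(outer_removed: float, inner_removed: float,
                    post_op_diameter: float) -> float:
    """(X - Y) / Z: zero indicates a perfectly centered preparation;
    positive values indicate transportation toward the outer wall."""
    return (outer_removed - inner_removed) / post_op_diameter
```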
Vertical Force and Clockwise Torque
The mean maximum vertical force values and clockwise torque values developed during X1, X2 and X3 instrumentation with the three downward loads are shown in Figure 4. Regarding the downward vertical force generated by the three instruments, Group 3N showed the largest force, followed by Groups 2N and 1N (p < 0.05). Regarding the upward vertical force, Group 3N recorded a significantly larger force than Group 1N for the X2 instrumentation (p < 0.05). The clockwise torque values showed no significant difference among the three instruments (p > 0.05). In Figure 4, with the same instrument, different capital letters indicate significant differences (p < 0.05); with the same downward load, different small letters indicate significant differences (p < 0.05).

Instrumentation Time
Figure 5 shows the time required for total instrumentation and for each of the three instruments. The total instrumentation time was significantly shorter in Group 3N than Group 1N (p < 0.05). Group 3N recorded the shortest time for X3 instrumentation out of all the groups (p < 0.05). X2 exhibited the longest time among the three instruments in all groups (p < 0.05).
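The one-way ANOVA and Tukey comparison of the total instrumentation time can be sketched as below; the arrays are random placeholders standing in for the recorded per-canal times (n = 10 per group), so the printed output is illustrative only.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
t_1n = rng.normal(120, 10, size=10)  # placeholder times (s), Group 1N
t_2n = rng.normal(110, 10, size=10)  # placeholder times (s), Group 2N
t_3n = rng.normal(100, 10, size=10)  # placeholder times (s), Group 3N

f_stat, p_val = f_oneway(t_1n, t_2n, t_3n)
if p_val < 0.05:
    times = np.concatenate([t_1n, t_2n, t_3n])
    groups = ["1N"] * 10 + ["2N"] * 10 + ["3N"] * 10
    print(pairwise_tukeyhsd(times, groups, alpha=0.05))
```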
Discussion
The dynamic torque and force characteristics of NiTi instruments during rotary root canal instrumentation have been investigated in numerous studies to assess the impact of various factors on the stress developed within the rotary instruments and the canal wall [14,17]. Thus, in addition to instrument-related factors, such as configuration and metallurgy [18], several operational factors that influence the stress development and shaping performance of NiTi rotary instrumentation have been identified [9-13]. Such factors that reduce torque and force generation and apical canal transportation include reciprocating motion [10] and a faster rotational speed [11,12]. Regarding the impact of factors related to the clinician's handling behavior, a shorter pecking depth decreases the screw-in force [9]. Additionally, faster pecking speeds produce less apical canal transportation and more torque and apical force [13].

Few studies are available concerning the influence of the downward load, another important handling-related factor, on the stress development and shaping performance of NiTi rotary instrumentation. The current findings demonstrated that a larger downward load reduced the instrumentation time and degree of transportation, while a larger load produced a larger upward vertical force, which represented the screw-in force [19]. Thus, the null hypothesis was rejected.

Automated root canal instrumentation was previously employed to study the torque and force development and canal shaping performance of NiTi rotary instruments under strictly controlled laboratory conditions by excluding the influence of operator bias inherent in hand motion [20]. This study used a load-controlled automated root canal instrumentation device that was designed to apply pre-set downward loads to the rotating instrument when the torque value did not exceed the predetermined torque limit value. The axial movement was configured to simulate the motion that is applied clinically during NiTi rotary instrumentation: a pecking motion as the primary up and down motion and a withdrawing motion when an instrument meets resistance. The downward load values of 1 N, 2 N, and 3 N were determined from a preliminary study, where downward loads created by experienced operators during NiTi rotary instrumentation were monitored using the torque/force measuring unit, and average downward loads of 1.5-2 N were obtained. The load-controlled automated instrumentation excludes the influence of operator bias and may greatly contribute to determining the effect of downward load values on the preparation outcome under highly standardized conditions. However, the movement of the automated device may not fully reproduce an actual clinical hand motion, in which downward loads may be more variable [21]. Thus, care should be taken in extrapolating the present findings to a clinical situation.
Simulated resin canals were used in this study to standardize the analysis by excluding anatomical variables and the variations in canal wall hardness inherent in natural teeth. Thus, various studies have employed resin canals instead of natural teeth to investigate torque/force generation [9,11,13,15,20] and apical canal transportation [22-24] during NiTi rotary root canal instrumentation. However, less force may be required to cut resin canals, as they are softer with a smoother surface texture than dentin [25]. Additionally, the resin chips are larger than natural dentin chips, causing more frequent canal blockage, particularly in the apical area [25]. These limitations may be compensated for by using extracted teeth following anatomical matching of the canals with micro-computed tomography [26].

The downward vertical force was significantly larger with an increasing pre-set downward load, indicating that the load application was conducted in a well-controlled manner. No significant difference in the clockwise torque was noted among the groups for each instrument size. The torque-limit setting and programmed movement of the handpiece, including the pecking speed [13], were appropriately adjusted so that the instruments required similar torque values during the shaping procedure irrespective of the different applied downward loads [14]. Thus, under the present conditions, the difference in the downward load can be assumed to be the factor most closely associated with the observed differences among groups, rather than the resulting clockwise torque generation.

PTN instruments were employed as a representative single-length rotary multiple-file system, in which an improved cyclic fatigue resistance [27] and reduced tendency for apical transportation [28] have been demonstrated compared with several other NiTi instruments. Such properties of the PTN instruments may be attributed to the use of a heat-treated M-Wire alloy [29] and its variable-tapered blade with an off-centered rectangular cross-section [27]. The peak vertical force generated by PTN is smaller than that by ProTaper Universal (Dentsply Sirona; manufactured from the conventional NiTi alloy); however, it is larger than that by Twisted File Adaptive (SybronEndo, Orange, CA, USA; manufactured from a heat-treated R-phase alloy) [30]. PTN develops greater torque than ProTaper Universal [31] and Twisted File Adaptive [32]. These findings are important because several instrument-related factors, including the metallurgy and configuration of the NiTi instruments, influence the torque and force generation during instrumentation [14,17]. Thus, whether different NiTi rotary systems would perform differently under the present experimental conditions deserves further study.

The post-instrumentation root canals in all groups showed apical canal transportation to the outer wall, potentially because of the tendency of NiTi instruments to recover their original linear shape during the instrumentation of the canal curvature [33]. Among operational and handling-related factors, reciprocating motion (vs. continuous rotation) [10] and a faster pecking speed [13] contribute to reducing the degree of apical transportation. In this study, Group 3N showed the least deviation at 0 mm from the apex, which is attributed to the shorter instrumentation time of Group 3N.
A shorter instrumentation time improves the canal centering ability, likely because of the reduced contact time of the blades with the canal wall, resulting in the removal of less resin from the outer canal wall [13]. These findings indicate that a larger downward load is associated with a larger screw-in force, which is represented by the upward vertical force acting on the canal model [15,19]. The screw-in force may induce instantaneous binding of a NiTi instrument, which exposes the instrument to a risk of breakage resulting from sudden and abrupt torsional stress [19]. The magnitude of the screw-in force varies depending on several factors, including the taper, tip size, and pitch length of the instruments [9,19], heat treatment [16], kinematic motion (continuous vs. reciprocating motion) [15,34], and pecking depth [9]. The present findings suggest that a larger downward load should be considered a factor that augments the screw-in force. Because the screw-in force is closely associated with instrument engagement with the root dentin, it is reasonable that a large downward load could lead to strong binding of an instrument to the canal wall. From a clinical perspective, knowledge of how the downward load influences the preparation behavior of NiTi rotary instruments is important to ensure the safe and efficient use of these instruments. The current study's findings indicate that the downward load should be regarded as a significant factor that may affect screw-in force-induced stress generation. Increasing the downward load may be beneficial for improving the shaping ability and reducing the preparation time of PTN rotary instrumentation, as long as the torque generation is appropriately controlled and the operator has sufficient skill and experience to manage the screw-in force. Additional studies using different brands of NiTi rotary instruments on anatomically matched extracted human teeth and adopting an automated handpiece movement that better mimics actual hand motion will deepen our knowledge of the effect of different downward loads on the shaping behavior of NiTi rotary instrument systems. Conclusions The study findings revealed that increasing the downward load during PTN rotary instrumentation improved the canal centering ability, reduced the instrumentation time, and increased the upward force without increasing the clockwise torque.
5,716.2
2022-04-01T00:00:00.000
[ "Materials Science" ]
A walk along the proton drip-line by β-decay spectroscopy. During the last decade we have carried out a systematic study of the β decay of neutron-deficient nuclei, providing rich spectroscopic information of importance for both nuclear structure and nuclear astrophysics. We present an overview of the most relevant achievements, including the discovery of a new exotic decay mode in the fp-shell, the β-delayed γ-proton decay in 56 Zn, the first observation of the 2 + isomer in 52 Co and the latest results on the heavier systems 60 Ge and 62 Ge. We also report on our deduced mass excesses in comparison with systematics and a recent measurement. Finally, we summarise our results on the half-lives of T z = -1/2, -1 and -2 neutron-deficient nuclides, and analyse their trend. Introduction The investigation of nuclei close to the limits of nuclear stability, so-called exotic nuclei, is at the frontier of modern nuclear physics. Their study is challenging, but it is becoming increasingly feasible thanks to new-generation facilities for the production and acceleration of radioactive ion beams (RIBs). Neutron-deficient nuclei can be populated even up to the boundary of nuclear stability, the proton drip-line, thus enabling one to perform detailed decay studies. Decay-spectroscopy experiments provide us with structure information of paramount importance [1-6], such as the half-life of the unstable nucleus and the energies and branching ratios for β-delayed γ or particle emission, e.g. protons, α particles or neutrons, depending on whether the β-decaying nucleus is neutron deficient or neutron rich. This is important since it allows us to determine the energy levels populated in the daughter nucleus together with their β feedings and thus reconstruct the partial decay scheme. From all this information one can finally determine the absolute values of the Fermi B(F) and Gamow-Teller B(GT) transition strengths and, in favourable cases, also deduce the mass excesses of the daughter and parent nuclei. If isospin were a good quantum number in nuclear physics, then mirror nuclei, in which the numbers of neutrons and protons are exchanged, would be identical. In reality their properties, such as the structure of the levels, quantum numbers, half-lives, β-decay strengths, etc., are very similar but not identical. There can be differences which break the ideal isospin symmetry. Charge-exchange (CE) reactions are the mirror strong-interaction process of β decay [7,8]. The comparison between β-decay data and CE reactions carried out on the stable mirror target allows us to investigate fundamental questions related to the role of isospin in atomic nuclei. Moreover, many exotic nuclei lie on the reaction pathways involved in various processes of nucleosynthesis, responsible for the production of the chemical elements in the Universe. Nuclear-structure properties such as half-lives, masses and β strengths are of great significance for nuclear astrophysics. One such nucleosynthesis process is the rapid proton capture (rp) process, which occurs in explosive stellar environments [9]. It acts on the neutron-deficient side of the nuclear chart and produces many medium-mass/heavy proton-rich elements, passing through neutron-deficient nuclei in the fp-shell and above. During the last decade we have performed a systematic study of neutron-deficient nuclei along or close to the proton drip-line by β-decay spectroscopy experiments with implanted RIBs.
Figure 1 shows the nuclei studied, spanning the fp-shell and above: the nuclei marked with a blue/yellow/purple circle were produced at the GSI/GANIL/RIKEN laboratories, respectively, while a blue or red circumference line indicates a value of -1 or -2 for the third component of the isospin quantum number, T z . The textbox indicates the primary beam that was used to produce the secondary RIB of interest in the various laboratories; in the case of GANIL, a 58 Ni primary beam was used for the experiment focused on the production of the T z = -2 nucleus 56 Zn [1,2], while a 64 Zn beam was used for the experiment devoted to the study of the T z = -1 nucleus 58 Zn [4]. The results from the GSI experiment are published in Ref. [10]. Results from the GANIL experiments are published in Refs. [1-5]. The most recent study of the β decay of 60 Ge and 62 Ge, performed at RIKEN, is reported in Ref. [6]. Results on 64 Se and 66 Se will be published soon. In the following we focus especially on the GANIL and RIKEN experiments. The experimental setups and the results obtained are discussed in Sect. 2. Our results on the mass excesses are summarised in Sect. 3 and compared with the mass evaluation systematics and a recent measurement.
Table 1. Nuclei produced at GANIL and RIKEN: reference, isotope of interest, its T z value and the experimental results for the number of implants N imp and half-life T 1/2 . The last two columns specify the laboratory and primary beam used for their production.
β-decay spectroscopy experiments and results As shown in figure 1, our β-decay spectroscopy experiments were performed worldwide at different RIB facilities, but conceptually they are the same. A primary beam is accelerated and fragmented on a thick target: for example, beams of 58 Ni or 64 Zn are fragmented on a natural Ni target at GANIL, while 78 Kr is fragmented on a Be target at RIKEN. The fragments are then selected by the LISE3 separator [12] at GANIL, or by the BigRIPS separator [13] at RIKEN. Thereafter they are implanted into Double-Sided Silicon Strip Detectors (DSSSDs), used to detect both the implanted heavy ions and any subsequent charged-particle decay: a 300-µm thick DSSSD at GANIL, or the WAS3ABi setup [14] comprising three 1-mm thick DSSSDs at RIKEN. Finally, arrays of high-purity Ge detectors surround the DSSSD setup to detect the β-delayed γ rays: four EXOGAM clovers [15] at GANIL, or the EURICA array [16] consisting of 12 clusters at RIKEN. More details on the setups and the procedures employed for particle identification and data analysis are available in Refs. [2,6]. Table 1 shows the number of implanted ions (N imp ) together with their measured half-lives T 1/2 and details, such as the laboratory and primary beam, of their production. The unprecedented statistics available at the RIKEN Nishina Center allowed us to extend the systematic exploration of neutron-deficient nuclei to higher masses, along the proton drip-line.
Several nuclei have been studied in our experimental campaign. We have measured β-delayed proton and γ emissions and the related branching ratios. Decay schemes and absolute B(F) and B(GT) strengths have been determined. For some of the cases the mass excesses have also been deduced. For some of the T z = −2 nuclides under study [1,2,5] it was possible to enrich the β-decay data by comparison with complementary ( 3 He,t) CE reactions performed on their stable mirror targets at RCNP Osaka [8,17,18]. Some common features emerge when looking at nuclei with the same T z value. The decay of the T z = −2 nuclei proceeds by both β-delayed proton emission and β-delayed γ de-excitation. An exotic feature that we have observed in all the T z = −2 systems studied ( 56 Zn, 48 Fe, 52 Ni and 60 Ge) is the competition between the γ de-excitation and the proton emission from the T = 2, 0 + isobaric analogue state (IAS) populated in the daughter nucleus [2,19]. The β-delayed proton emission from the IAS is isospin-forbidden, but it is observed, and this is attributed to a T = 1 isospin impurity in the IAS wave function. In the cases of 48 Fe and 52 Ni the β-proton component constitutes only 14% and 25% of the respective IAS total decays. A reason for this is the relatively low energy of the protons emitted by the daughter nuclei, corresponding to calculated proton half-lives of the same order of magnitude as the γ-decay Weisskopf transition probabilities [2]. In the decay of 56 Zn (shown in figure 2), the β-proton component de-exciting the IAS is 44% of the total, i.e., much larger. The 56 Zn daughter, 56 Cu, has another 0 + state within 100 keV of the IAS and, since the mixing depends on how close the two states are, the strong isospin mixing of 33% favours proton decay [1]. At this point one would expect that the much faster proton decay (t 1/2 ∼ 10 −18 s) from the 56 Cu IAS should dominate over the γ de-excitation (t 1/2 ∼ 10 −14 s in the mirror). However, the competing γ decay from the IAS is still observed, being 56% of the total. The reason for such behaviour lies in the nuclear structure. Two independent shell-model calculations [20,21] found that the proton decay of the T = 1 IAS component is hindered by a factor of 10 3 . Finally, in the case of 60 Ge the β-delayed proton emission from the IAS is estimated to be between 74.5% and 95% of the total [6], which may again be due to nuclear structure. 56 Zn, lying at the proton drip-line, has been one of the most intriguing cases since it presents many interesting and unusual features [1]. As mentioned above, the IAS is divided between two mixed levels. These levels are both fed by the Fermi transition, so that the Fermi strength is shared between them. Moreover, in 56 Zn we have discovered a new exotic decay mode in the fp-shell: the β-delayed γ-proton decay. This exotic decay pattern is possible because in 56 Zn the β-delayed γ rays populate levels in the daughter nucleus which are located above the proton separation energy and hence are unbound. The levels then decay by proton emission to the ground state (g.s.)
of 55 Ni. Therefore the sequence is β-delayed γ-proton decay, and we have observed three such sequences. The comparison between the β decay of 56 Zn to 56 Cu [1] and the mirror CE process, the 56 Fe( 3 He,t) 56 Co reaction [17], shows a remarkable isospin symmetry: all the dominant transitions are observed in both cases and with very similar strengths. Such a comparison allowed us to clarify some aspects of the level structure in 56 Cu which would otherwise have remained unclear. 60 Ge is another fascinating case. Its decay was almost unknown before the present experiment, in which we obtained the first experimental information on both the β-delayed proton and γ emissions, reconstructing a complex decay scheme involving five different nuclei [6]. The partial decay scheme showing the transitions populating energy levels in the daughter nucleus 60 Ga, which were unknown before, is reported in figure 3. 60 Ga lies right at the proton drip-line; thus its structural properties are of relevance for the rp-process [11]. A common feature that we have found in the T z = -1 nuclei [6,10] is the suppression of isoscalar γ transitions between J π = 1 + , T = 0 states (the Warburton and Weneser quasi-rule [22,23]). We have verified experimentally that it also holds in the heavier 62 Ge nucleus. In addition, we did not find evidence of enhanced low-lying Gamow-Teller strength in 62 Ga due to isoscalar proton-neutron pairing, confirming the findings of a previous measurement [24]. Finally, among the many results we emphasise the first observation of the 2 + isomer in 52 Co [3]. There had been speculation that such an isomeric state exists, but it was not observed earlier because, when one attempts to populate 52 Co directly, the g.s. and the isomeric state, which have very similar half-lives, are produced and implanted together. We have succeeded in disentangling the two decays, measuring for the first time the isomer half-life [102(6) ms] and the g.s. half-life without contamination [112(3) ms]. The trick was to look at the implantation of 52 Ni, whose decay directly populates the 0 + , T = 2 IAS in 52 Co. The latter then de-excites by γ-ray emission, populating the 2 + isomeric state in a selective way. Mass excesses The mass excesses of the β-decaying neutron-deficient nucleus and its daughter nucleus are important to determine key quantities such as the Q β value of the decay, needed to deduce the B(F) and B(GT) strengths. The proton separation energy S p in the β-daughter nucleus can be determined by knowing its mass excess and that of the β-proton daughter nucleus. There is a lack of mass measurements in this mass region. When the mass excesses are not known, the atomic mass evaluation (AME) systematics can be used. Another option is to determine the mass excess of the nucleus of interest from the Isobaric Multiplet Mass Equation (IMME) [25-27]: ME(α, T, T z ) = a(α, T) + b(α, T) T z + c(α, T) T z ² . (1) This can be done when at least three other members of the isospin multiplet are known. In equation 1, T z is the third component of the isospin T and α stands for all the other quantum numbers. From our β-decay data we have determined the g.s. mass excesses of the T z = -2 nuclei 48 Fe, 52 Ni and 56 Zn in Ref. [2] and 60 Ge in Ref.
[6] from the IMME, knowing four members of each quintuplet. We have also deduced the g.s. mass excesses of 62 Ge and 60 Ga, and S p in 60 Ga [6]. Our deduced mass excesses (red filled circles and diamonds) are shown in figure 4 and compared with the values obtained from the 2003 [28], 2012 [29] and 2020 [30] AME systematics. In the figure, open symbols represent values from the AME systematics, while filled symbols are measured or deduced values. We have observed since our first study of 56 Zn [1] that, for proton-rich nuclei in this region of the mass chart, the AMEs published after 2003 are in poorer agreement with our IMME or deduced values for the mass excess. Other authors have since reported similar issues [11,31]. As shown in figure 4, the values from the 2003 AME lie closer to our estimates than the values from the 2012 AME. The values from the 2016 AME [32] (not shown in the figure) have a very similar behaviour. The 2020 AME also behaves similarly for the unmeasured nuclei, but includes new measured values for 48 Fe and 52 Ni (black filled circles).
Figure 4. Mass excesses of 48 Fe, 52 Ni, 56 Zn, 60 Ge, 60 Ga and 62 Ge. The difference is shown between the values we obtain (red filled circles and diamond) [2,6] and values from the 2003, 2012 and 2020 AMEs [28-30]. Open symbols represent values from systematics, while filled symbols are experimental or deduced values. The cyan filled triangle is the measurement from Ref. [11]. The data points belonging to each nucleus are slightly displaced to show the error bars better.
The AME systematic evaluation does not include isobaric multiplets because the IAS might be mixed and thus its energy might deviate from the IMME formula. An IAS-mixed case is 56 Zn [1]. We think that, with caution, this knowledge can help to calculate extrapolated values. New mass measurements in this region are important to better constrain the future AME. A recent measurement of the mass excess in 60 Ga is also shown in figure 4 (cyan filled triangle) [11]; it is in excellent agreement with our indirect determination from the β-decay data [6]. Paul et al. have also determined S p = 78(30) keV in 60 Ga, in agreement with our value of 90(15) keV [6]. The 2020 AME systematic value is -340(200) keV [30]. By combining the two experimental values, S p ( 60 Ga) = 88(18) keV is obtained, establishing the proton-bound nature of 60 Ga. This value, together with the fact that 59 Ga was not observed in fragmentation reactions at NSCL, provides strong evidence that 60 Ga is the last proton-bound gallium isotope [11]. 59 Ga was also not observed in our recent experiment at RIKEN, strengthening the conclusion that 60 Ga marks the location of the proton drip-line for Z = 31. Half-life trends In the present section we summarise the results concerning the half-lives of the nuclei studied and analyse their trends. Figure 5 represents all the measured half-lives as a function of the atomic number Z for the T z = -1/2, -1 and -2 nuclei. It is an updated version of figure 7 from Ref. [4], where we have included or updated the half-lives of the following nuclides: 56 Zn [1]; 48 Fe and 52 Ni [2]; 52 Co [3]; 60 Ge, 60 Ga, 62 Ge and 59 Zn [6]; and 44 Cr [19]. Three curves are obtained, corresponding to the different T z values, because the B(F) value driving the decay is the same for all the nuclei with the same T z . As discussed in Ref.
[4], the systematic decrease of the T 1/2 values with the mass, seen in the T z = -1/2 nuclei, reflects the increase in the Q β value. A similar decreasing pattern is observed in the half-lives of the T z = -1 nuclei and, on top of this behaviour, a typical odd-odd and even-even effect is found. The latter is due to the fact that in the odd-odd nuclei there exist other excited states below the IAS that receive a significant amount of β feeding, which makes their half-lives slightly shorter in comparison with their even-even neighbours [4]. In the T z = -2 nuclei, which are all even-even, a smoother decreasing trend is observed. The half-life of the most exotic T z = -2 nucleus, 60 Ge, is only 25.0(3) ms. Conclusions We have given an overview of the most relevant achievements from our β-decay spectroscopy experiments at GANIL and RIKEN. Detailed spectroscopic information has been obtained for several neutron-deficient nuclei, from lighter to heavier systems along the proton drip-line. Half-lives, decay schemes, β-decay transition strengths and mass excesses have been determined, many of them for the first time. These results are relevant for both nuclear structure and nuclear astrophysics. Our deduced mass excesses have been compared with different AME systematics, indicating the need for more mass measurements in this region of the nuclear chart. The half-life values as a function of the mass number have been analysed, providing a comprehensive understanding of the half-life trends in terms of the Fermi strength and the Q β value. We have shown that valuable spectroscopic information can be obtained from this kind of experiment, thus improving our knowledge of the properties of neutron-deficient nuclei.
Figure 1. Summary of the β-decay spectroscopy experiments carried out along the proton drip-line, in the fp-shell and above [1-6, 10]. Neutron-deficient nuclei produced at the GSI/GANIL/RIKEN laboratories are marked by a blue/yellow/purple circle, where a blue or red circumference line indicates a value of T z = -1 or -2. The primary beams used in the different laboratories and their energies are reported in the textbox.
Figure 2. Partial decay scheme of 56 Zn (reprinted without changes from Ref. [5] under CC BY 3.0). Transitions corresponding to those observed in the mirror 56 Co nucleus are represented by dotted lines.
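As an illustration of the IMME-based procedure of Sect. 3, the short sketch below fits the three coefficients of equation 1 to three known members of a multiplet and extrapolates to the proton-rich T z = -2 member. The numerical inputs are placeholders, not the values of Refs. [2,6].

    import numpy as np

    # IMME, equation (1): ME(alpha, T, Tz) = a + b*Tz + c*Tz**2 at fixed (alpha, T).
    # Placeholder mass excesses (keV) for three known members of a T = 2 multiplet;
    # the actual inputs are tabulated in Refs. [2,6].
    tz_known = np.array([2.0, 1.0, 0.0])
    me_known = np.array([-36200.0, -29800.0, -22600.0])

    # Three known members determine the three coefficients exactly.
    c, b, a = np.polyfit(tz_known, me_known, 2)

    # Extrapolate to the unmeasured Tz = -2 member of the multiplet.
    tz_new = -2.0
    me_new = a + b * tz_new + c * tz_new ** 2
    print(f"deduced ME(Tz = {tz_new:+.0f}) = {me_new:.0f} keV")

With four known members, as for the quintuplets discussed above, the fit is over-determined and also provides a consistency test of the quadratic form itself.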
4,284
2023-04-29T00:00:00.000
[ "Physics" ]
Operating scheme for the light-emitting diode array of a volumetric display that exhibits multiple full-color dynamic images Abstract. We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications including digital signage, security systems, art, and amusement. Introduction Viewers can observe the 3-D images formed by a volumetric display from any surrounding viewpoint without requiring additional devices. 2-4 Therefore, volumetric displays could be applied for 3-D visualization in many fields. As shown in Fig. 1(a), viewers can view different images depending on the viewpoint of a volumetric display designed by our previously proposed algorithm. 5-8 The potential uses of the system include digital signage, security systems, art, and amusement because it can provide different 2-D information to multiple people simultaneously. The volumetric display shown in Fig. 1(a) was made with a glass cube in which many small cracks were induced by a laser. The display represents three monochromatic static images. Moreover, in an earlier study, 5 we had provided a brief introduction to a volumetric display composed of 8 × 8 × 8 light-emitting diodes (LEDs) as an application of the proposed algorithm and shown an outline of a system exhibiting dynamic color images (alphabet and numbers) in two directions, as shown in Fig. 1(b). In this paper, we describe the hardware design of the volumetric display system in more detail. However, only eight colors (red, green, blue, cyan, magenta, yellow, white, and black) are available in the aforementioned system, which uses a microcomputer to control the lighting pattern of the 3-D LED array. Therefore, in addition to describing the multicolor system, we aim to achieve full-color representation by controlling the emission color of each volume element (voxel) of the LED array. For full-color representation, we used pulse-width modulation (PWM). 9,10 In PWM, the light intensity gradation is represented by simply controlling the on/off ratio of the LEDs within a short period. That is, adjusting the lighting times of the red, green, and blue LEDs enables full-color representation. Note that high-speed signal processing is necessary for achieving PWM. Therefore, we designed and developed a special-purpose control circuit for the LED array using a field-programmable gate array (FPGA), which operates at a higher frequency and is more suitable for parallel computing than a microcomputer. Moreover, we verify the system's operation and demonstrate that it exhibits different full-color dynamic images in two orthogonal directions. This system represents a prototype for a directional display based on our previously proposed algorithm, 5-8 which allows multiple viewers to receive 2-D images independently and simultaneously.
Hardware Design In this section, we describe the hardware design of the proposed systems: a multicolor volumetric display and a full-color volumetric display. Multicolor Volumetric Display System The LED-based volumetric display consists of two units: a display unit and a control unit. The LED array used as the display unit is composed of eight LED boards, on which 8 × 8 full-color LEDs are mounted, as shown in Fig. 2(a). These boards were obtained by deconstructing a commercially available product (3-D LED Cube MB8X, LEDGEND Technology Inc.). On each of the boards, 12 serial-in/parallel-out LED drivers (SCT2024 11 ), each of which has 16 parallel outputs, are connected in cascade, as shown in Fig. 2(b). For each LED board, 192 elements (Els) (64 voxels × 3 channels) can be controlled independently using only one serial input. Channels 1, 2, and 3 represent red (R), green (G), and blue (B), respectively. Here, the maximum frequency of the LED drivers is 25 MHz. These LED drivers are controlled by four 1-bit signals: clock (CLK), serial data input (SDI), latch (LA/), and output enable (OE/), in the following steps. First, the data are latched when LA/ is low. LA/ should be low while the data of SDI are input to the drivers. Next, the data of SDI are input serially from the red channel of voxel 1 to the blue channel of voxel 64 through the shift registers, as shown in Figs. 2(a) and 2(b). SDI is sampled at the rising edge of CLK. When LA/ is driven high after the data of SDI are input, the data on the shift register go through. When OE/ is forced low, the outputs of the LED drivers are enabled and some of the LEDs turn on according to the input SDI. By doing so, arbitrary lighting patterns can be represented by only four 1-bit signals per LED board, namely SDI, CLK, LA/, and OE/. We will explain how to control the four signals in detail with a specific example below (shown in Fig. 3). As a control unit, we used the microcomputer board Arduino Mega 2560 (Arduino, LLC), which has 54 digital I/O pins and can be easily controlled by the Arduino programming language based on C/C++. The microcomputer generates control signals according to the source code written into the flash ROM of the board and sends them to the display unit to render 3-D images. The operating frequency of the microcomputer is 16 MHz. Figure 3 shows a specific example of the timing chart of processing. First, we describe the four control signals for an LED board. In the timing chart, Els 1, 2, and 3 correspond to the R, G, and B channels of voxel 1, respectively, as described in Fig. 2(b). When OE/ is high, all the LEDs on the board emit no light. When OE/ is driven low, some of the LEDs emit light according to the signals of SDI; thus, all the voxels of the board take their respective colors (red, green, blue, cyan, magenta, yellow, white, or black). We refer to the duration of light emission as the displaying period. SDI is high only when it corresponds to El 1; thus, voxel 1 turns red during the displaying period. Similarly, voxels 2, 3, and 64 turn green, magenta (red + blue), and white (red + green + blue), respectively. It is difficult to control all eight boards simultaneously with one microcomputer because it is not suitable for parallel processing. Therefore, the microcomputer controls the eight boards sequentially.
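As an illustration of this serial interface, the sketch below packs the 192-bit SDI frame for one board in the multicolor (1-bit) system, reproducing the voxel colors of the Fig. 3 example. It is a host-side behavioral model written for clarity, not firmware; the function and variable names are ours.

    # Behavioral sketch of the 192-bit serial frame for one LED board:
    # 64 voxels x 3 channels (R, G, B), shifted in from the R channel of
    # voxel 1 to the B channel of voxel 64, matching the driver cascade.
    COLORS = {
        "black": (0, 0, 0), "red": (1, 0, 0), "green": (0, 1, 0),
        "blue": (0, 0, 1), "cyan": (0, 1, 1), "magenta": (1, 0, 1),
        "yellow": (1, 1, 0), "white": (1, 1, 1),
    }

    def pack_sdi(voxel_colors):
        """Return the 192 SDI bits for one board, in shift order."""
        bits = []
        for color in voxel_colors:      # voxel 1 first
            bits.extend(COLORS[color])  # its R, G, B channel bits
        return bits

    # Fig. 3 example: voxel 1 red, voxel 2 green, voxel 3 magenta,
    # voxel 64 white; all other voxels black.
    voxels = ["black"] * 64
    voxels[0], voxels[1], voxels[2], voxels[63] = "red", "green", "magenta", "white"
    sdi = pack_sdi(voxels)
    assert len(sdi) == 192

    # The control unit would clock these bits out on SDI (sampled on the
    # rising edge of CLK), drive LA/ high to pass the shifted data, and
    # pull OE/ low for the displaying period.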
Full-Color Volumetric Display System In the multicolor system, which is the first prototype of the LED-based volumetric display, we used the microcomputer board as the control unit and succeeded in demonstrating that the display unit operates as intended. However, the display unit could represent only on/off states (1 bit) for each color channel owing to the circuit structure of the LED array, as described in Sec. 2.1. To represent full-color 3-D images, a control unit that realizes PWM is required. The performance of the display unit depends on the maximum operating frequency of the LED drivers (25 MHz) and is sufficient for PWM. However, the microcomputer is not suitable for PWM because of its limited maximum operating frequency (16 MHz) and processing property (serial processing). Therefore, we designed a control unit with an FPGA to achieve full-color 3-D image representation with PWM, as shown in Fig. 4. The host PC sends voxel data of the 3-D image to the control unit via a serial communication interface. The voxel data are color information comprising red, green, and blue components, each of which has 8-bit depth. The control unit generates control signals from the voxel data with PWM and sends them to the display unit. The display unit renders arbitrary 3-D images based on the received control signals. We developed software for controlling the host PC using Visual Studio 2015 as a Windows Forms application. The control unit was implemented on an Atlys board (Digilent Inc.) 12 on which an FPGA chip, Spartan-6 LX45, 13 operating at 100 MHz is mounted. The control unit controls the display unit with digital I/O pins, which are enabled by an add-on breadboard (VmodBB, Digilent Inc.) attached to the Atlys board. Figure 5 shows the block diagram of the control unit. Each block is described in detail as follows: 1. The serial port controller block receives voxel data from the host PC via a serial port mounted on the Atlys board. The 1-bit serial data are stored in memory and sent to the gamma correction block 8 bits at a time. The total number of data required for displaying a frame is 12,288 bits (= 8 bits × 3 channels × 512 voxels). The baud rate of serial communication is flexible and can be raised to 12 Mbps. 14 In this system, we set the baud rate to 1.8 Mbps. This communication speed is sufficiently high for realizing a real-time display system (e.g., a system with a 60-Hz refresh rate). 2. Typically, the output (light intensity) of a general 2-D display is proportional to the γth power of the input, with γ adjusted such that the color gradation appears natural to human eyes. Here, γ is called the gamma value and typically takes values from 1.8 through 3.0. The gamma correction block in our circuit raises the input to the 2.2th power. The input of this block is an 8-bit signal, which represents the gradation value of a channel (red, green, or blue). The block was designed to have a 9-bit output. The bit length of the output is 1 bit longer than that of the input to prevent information loss. The process is implemented as an 8-bit-in/9-bit-out look-up table. Here, each of the input and output of this block is a fixed-point number. Using the specific example shown in Fig.
6(a), the processing flow is described as follows. We consider the case where the RGB channels of gamma-corrected voxel 1 are 511, 255, and 127, respectively. The sequence of processes, which is detailed in the description of the multicolor system, should be repeated 511 times to display one frame of a full-color 3-D image. In the PWM block, a 9-bit counter (CNT) incremented at intervals of the pulse width was implemented. By turning on the corresponding LEDs only while the 9-bit voxel data input is higher than CNT, the desired luminance, proportional to the voxel data input, can be achieved. Figure 6(b) shows the timing chart. In the case shown in Fig. 6(a), when CNT is between 1 and 127, all channels (RGB) of voxel 1 should turn on while OE/ is low. When CNT is between 128 and 255, the R and G channels of voxel 1 should turn on but the B channel should not. When CNT is between 256 and 511, only the R channel of voxel 1 should turn on. By doing so, the LED of voxel 1 emits light of the desired color (orange in this case). Design Algorithm This section describes the algorithm 5 used to determine the lighting pattern for the LED array exhibiting multiple images. Here, we describe the algorithm in the case where the array exhibits two images in orthogonal directions, as shown in Fig. 7. The volumetric display comprises P × Q × R volume elements (voxels), which correspond to 8 × 8 × 8 full-color LEDs in this study. The voxel value V ijk indicates the brightness of the LED at (i, j, k) and can be determined as follows: 1. Each of the original images is set up in the direction in which it is required to be exhibited. 2. Perpendicular lines (blue lines in Fig. 7) are drawn from the voxel to images A and B. 3. V ijk is calculated as shown in Eq. (1), where a ij and b kj correspond to the pixel values of the original images A and B at the intersections with each perpendicular line: V ijk = a ij × b kj . (1) Now, we consider the images exhibited by the volumetric display with the determined voxels. We assume that the pixel values of the exhibited images are given by summations of the voxel values along the projection directions when we look at the display from a distance. Therefore, a′ ij and b′ kj , the pixel values of the exhibited images A and B, are represented as shown in Eqs. (2) and (3), respectively: a′ ij = Σ k V ijk = a ij Σ k b kj , (2) b′ kj = Σ i V ijk = b kj Σ i a ij . (3) Note that the exhibited images are given by multiplying the original image by a background-noise term corresponding to the interference from the other image. The original image components in Eqs. (2) and (3) tend to be dominant over the background noise; thus, the exhibited images are recognized as the original images. 5 On the other hand, this recognition occurs only when we look at the volumetric display from an appropriate viewpoint, i.e., the exhibited images have directional characteristics. By applying the above calculation procedure to each channel, a volumetric display exhibiting two full-color images can be designed. The full-color expandability of the algorithm has been reported in previous work using inkjet-printing technology. 15 That study experimentally demonstrated that the algorithm can be applied to the case where complicated images (i.e., full-color photographs) were used as the originals.
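To see the design algorithm at work, the sketch below evaluates Eqs. (1)-(3) for one color channel using random placeholder images. It makes explicit that each exhibited image equals the original modulated column-wise by the other image's column sums, which is the interference term discussed above.

    import numpy as np

    # Sketch of the lighting-pattern algorithm for two orthogonal images,
    # one color channel; a and b are placeholder original images.
    P = Q = R = 8                       # 8 x 8 x 8 voxels, as in this study
    rng = np.random.default_rng(0)
    a = rng.random((P, Q))              # original image A, pixels a_ij
    b = rng.random((R, Q))              # original image B, pixels b_kj

    # Eq. (1): V_ijk = a_ij * b_kj.
    V = np.einsum("ij,kj->ijk", a, b)

    # Eqs. (2)-(3): exhibited images as sums along the projection directions.
    a_exhibited = V.sum(axis=2)         # a'_ij = sum_k V_ijk
    b_exhibited = V.sum(axis=0).T       # b'_kj = sum_i V_ijk

    # Each exhibited image is the original times the other image's
    # column sums -- the background-noise term in the text.
    assert np.allclose(a_exhibited, a * b.sum(axis=0))
    assert np.allclose(b_exhibited, b * a.sum(axis=0))

Repeating the same computation for the R, G, and B channels gives the full-color design described in the text.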
Evaluation of Light Emission To evaluate the full-color volumetric display system, we measured the luminance of a voxel (Y) against the input (X) from the host PC using a laser power meter (LP1, Sanwa Electric Instrument Co., Ltd.), as shown in Fig. 8(a). A regulated DC power supply provided a constant voltage of 4.0 V to the display unit. The blue graph in Fig. 8(b) shows the normalized luminance when the voxel color is red. The result is in agreement with the theoretical value of the normalized Y = X^2.2, where X is a digital value sent from the host PC to the control unit and Y is a normalized luminance value. The root-mean-squared error between the experimental result and the theoretical value is almost zero (1 × 10^-3). We obtained almost identical results for the other voxel colors (green and blue). From these results, we verified the output of the developed system. Figure 9 shows the volumetric display exhibiting different full-color dynamic images in two orthogonal directions. As shown in Figs. 9(a) and 9(c), one image is a string of alphabet (A to L) observed from the front of the display, and the other is a string of numbers (0 to 9) observed from a side. Different images are observed from different viewpoints, as shown in Figs. 9(b) and 9(d). See also Video 1. When the volumetric display was observed from viewpoints other than the front and side views, no meaningful images could be obtained. In particular, the viewing zones of the exhibited images were narrow. This shows that the developed system has directional characteristics. To demonstrate the full-color representation of the system, each frame was given a hue value 5 deg greater than that of the previous frame, as shown in Fig. 9, in which 1/3 of all frames is displayed. Here, we set the saturation and brightness of the images to the maximum values. The hue-saturation-brightness color coordinates are converted to red-green-blue (RGB) values before communication to the system because the developed system is based on the RGB color model. In Fig. 9, the hue of the image increases from left to right: H = 15 deg, 30 deg, 45 deg, . . ., 360 deg. The multicolor volumetric display system could represent only eight colors, whereas the developed system can represent colors that could not be represented by the multicolor system, for example, orange and purple. Discussion First, we discuss the images exhibited by the developed volumetric display. In the images shown in Figs. 9(b) and 9(d), the brightness differs according to the locations of the pixels. This difference was caused by the interference from the other image, as described in Sec. 3. We believe that such cross talk could be reduced using the iteration method proposed in a previous study. 8 Moreover, the developed system, which has 8 × 8 × 8 voxels, is smaller in scale than the glass prototype shown in Fig. 1(a), which has 64 × 64 × 64 voxels. Therefore, the developed system could exhibit only simple images, e.g., a character. In future work, we will develop a larger system to exhibit complicated images such as photographs. Moreover, we found a problem with the system: some voxels are hidden by the front black circuit board. A transparent circuit board will solve this problem.
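The luminance evaluation above can be reproduced numerically. The sketch assumes the gamma block's look-up table rounds 511 × (X/255)^2.2 to the nearest integer; the text specifies the 2.2th-power mapping and the 9-bit output, so the exact rounding scheme is our assumption.

    # Expected normalized luminance of a voxel: the 8-bit input X is
    # gamma corrected to 9 bits and the LED is on for that many of the
    # 511 PWM steps, so Y should track X**2.2.
    lut = [round(511 * (x / 255) ** 2.2) for x in range(256)]

    def normalized_luminance(x):
        return lut[x] / 511.0           # on-time fraction over 511 steps

    # Root-mean-squared deviation from the theoretical curve Y = X^2.2.
    rms = (sum((normalized_luminance(x) - (x / 255) ** 2.2) ** 2
               for x in range(256)) / 256) ** 0.5
    print(f"RMS deviation from Y = X^2.2: {rms:.1e}")

The residual deviation stems only from the 9-bit quantization and is of the order of 10^-3, in line with the near-zero error reported above.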
Next, we discuss the frame rate of the display. The number of Els per LED board is 192 (64 voxels × 3 channels). Therefore, it takes 7.68 μs for the control unit to send the 192 data of SDI to the display unit with the 25-MHz CLK (192/25 MHz = 7.68 μs). As mentioned in Sec. 2.2, the pulse width (the duration of OE/) is set to the same amount of time as the time of sending the data of SDI (7.68 μs). Thus, it takes 7.85 ms in total to represent a frame because the cycle of sending the data and enabling the output is repeated 511 times (2 × 7.68 μs × 511 = 7.85 ms). When the baud rate of serial communication is 1.8 Mbps, the total communication time between the host PC and the control unit for all the voxel data of a frame is ∼6.83 ms (= 8 bits × 3 channels × 512 voxels / 1.8 Mbps). Because the communication time per frame is shorter than the computation time, the communication between the host PC and the control unit can be completed while the previous frame is being represented. Therefore, the communication time does not need to be considered in determining the frame rate of the display. As a result, the frame rate of the developed system is determined only by the computation time and is ∼127 Hz (= 1/7.85 ms). Here, we discuss the limitation on the number of voxels on the basis of the frame rate of general television (30 Hz). If the display operates at 30 Hz, the control unit developed in this study can control up to four times as many voxels as in the current system (127/30 ≈ 4), for example, 16 × 16 voxels per four control signals. Finally, we discuss the designed control circuit. In this study, we succeeded in designing a simple and easy-to-design circuit for a full-color volumetric display as a first prototype. The present circuit could not realize a real-time display system when the number of voxels increases (more than 16 × 16 voxels per four control signals). This issue is more prominent in the field of volumetric displays than in the field of conventional 2-D displays because the number of voxels increases on the cubic order. We will develop a higher-resolution system by increasing the number of control signals and implementing parallel processing. To this end, we will design and develop an LED-array circuit that can be controlled by a more effective operating scheme. For example, the use of full-color LEDs comprising control chips seems to be a good approach to realizing an effective operating scheme. Conclusion In this study, we developed a multicolor volumetric display system as a first prototype. In addition, we designed and developed the control circuit of an LED array for realizing a full-color dynamic volumetric display. The developed control circuit, which was implemented on an FPGA, is able to control the lighting pattern of the LED array in parallel and at high speed. Thus, we achieved the representation of full-color dynamic images with a simple circuit structure. Moreover, we experimentally evaluated the system by measuring the luminance of a voxel with varying input and succeeded in demonstrating that the volumetric display exhibits two full-color dynamic images. This demonstration shows the future expandability of the algorithm proposed in the previous study. 5
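The frame-rate arithmetic in the Discussion above can be checked with a few lines; all numbers are taken directly from the text.

    # Timing figures from the Discussion, recomputed.
    clk_hz = 25e6                        # LED driver clock
    els_per_board = 192                  # 64 voxels x 3 channels
    t_shift = els_per_board / clk_hz     # time to shift one board's SDI data
    t_frame = 2 * t_shift * 511          # shift + display, 511 PWM steps
    frame_rate = 1 / t_frame

    bits_per_frame = 8 * 3 * 512         # 12,288 bits of voxel data
    t_comm = bits_per_frame / 1.8e6      # serial link at 1.8 Mbps

    print(f"shift time    : {t_shift * 1e6:.2f} us")  # 7.68 us
    print(f"frame time    : {t_frame * 1e3:.2f} ms")  # 7.85 ms
    print(f"frame rate    : {frame_rate:.0f} Hz")     # ~127 Hz
    print(f"comm per frame: {t_comm * 1e3:.2f} ms")   # ~6.83 ms < frame time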
Fig. 1 Three-dimensional structure exhibiting multiple images: (a) glass cube exhibiting three images. (b) LED-based volumetric display exhibiting two multicolor dynamic images: alphabet in the front view and numbers in the side view.
Fig. 2 Display unit. (a) Schematic diagram of the LED boards. The volumetric display comprises eight LED boards. (b) Simplified block diagram of the LED boards consisting of the serial-in/parallel-out LED drivers.
Fig. 6 Specific example of the processing flow. (a) Description of the full-color representation using PWM and (b) timing chart of processing.
Fig. 8 Measurement of the luminance of a voxel: (a) experimental setup and (b) measured luminance of a voxel. X is a digital value sent from the host PC to the control unit and Y is a normalized luminance value.
Fig. 9 Volumetric display exhibiting different full-color dynamic images in two orthogonal directions.
5,061
2017-07-01T00:00:00.000
[ "Materials Science" ]
Mechanism of Filament Nucleation and Branch Stability Revealed by the Structure of the Arp2/3 Complex at Actin Branch Junctions Actin branch junctions are conserved cytoskeletal elements critical for the generation of protrusive force during actin polymerization-driven cellular motility. Assembly of actin branch junctions requires the Arp2/3 complex, upon activation, to initiate a new actin (daughter) filament branch from the side of an existing (mother) filament, leading to the formation of a dendritic actin network with the fast-growing (barbed) ends facing the direction of movement. Using genetic labeling and electron microscopy, we have determined the structural organization of actin branch junctions assembled in vitro with 1-nm precision. We show here that the activators of the Arp2/3 complex, except cortactin, dissociate after branch formation. The Arp2/3 complex associates with the mother filament through a comprehensive network of interactions, with the long axis of the complex aligned nearly perpendicular to the mother filament. The actin-related proteins, Arp2 and Arp3, are positioned with their barbed ends facing the direction of daughter filament growth. This subunit map provides direct structural insight into the mechanism of assembly and the mechanical stability of actin branch junctions. Introduction The Arp2/3 complex is a key cytoskeletal regulator of actin polymerization [1]. The complex promotes the assembly of dendritic actin networks that drive cell locomotion, phagocytosis, and intracellular motility of lipid vesicles, organelles, and invasive pathogens [2]. Conserved among eukaryotes, this seven-subunit, 220-kDa complex consists of two actin-related proteins, Arp2 and Arp3, and five additional subunits named ARPC1 through ARPC5. The isolated complex has a low nucleation activity, but upon binding to nucleation promoting factors (NPFs), ATP, and preexisting (mother) actin filaments, the Arp2/3 complex promotes the formation of a branched actin structure where the complex itself is situated at the branch junction [3,4]. Despite intensive study, the mechanistic details of branch junction formation are still poorly understood, partly because of the lack of high-resolution information about the structure of the activated conformation of the complex at the branch junction. Two speculative models have been proposed for the subunit organization of the Arp2/3 complex at these branch junctions. Information used for the modeling included sequence conservation among species, available biochemical and structural information, and, most important, the hypothesis that Arp2 and Arp3 assume an actin filament dimer-like configuration that templates the initiation of the daughter filament in the barbed end direction [5,6]. Another, conceptually different model, derived primarily from kinetic analysis, suggested that the Arp2/3 complex induces branching and elongation at the barbed end of growing filaments, with Arp2 and Arp3 being incorporated into two different actin filaments [7]. However, no direct structural data were available to support any of the proposed nucleation models. We provide here the first structural analysis, to our knowledge, of the Arp2/3 subunit organization at the branch junction at molecular resolution, using genetic labeling, electron microscopy, and computational analysis. We show that various NPFs, except cortactin, dissociate from the complex after branch formation and that all of the Arp2/3 subunits are in a position to contact the mother filament.
In contrast to the previous attempts to model the orientation of Arp2/3 within the actin branch, we have not assumed that Arp2 and Arp3 are oriented toward the daughter filament. Thus, our unbiased subunit localization provides direct evidence that Arp2 and Arp3 are positioned with their barbed ends facing the direction of daughter filament growth. Results/Discussion A direct observation of the complex within the branch junction at molecular resolution is required to better understand the mechanism of branched actin nucleation by the Arp2/3 complex. The general strategy that we have taken to achieve this goal was to assemble actin branches in vitro using a complex with one of the subunits carrying a label that can be detected by electron microscopy. The location of the label (and the corresponding subunit) in the image plane can be determined by difference mapping between the two-dimensional (2D) projection maps of branches assembled with labeled and unlabeled complexes. The WASp-Family NPFs, but Not Cortactin, Detach from the Arp2/3 Complex after Branch Formation We first employed this strategy to compare branch junctions formed in the presence of different NPFs of increasing molecular weight. Difference mapping would allow detection of the additional density contributed by the larger NPF, allowing localization of the NPF in the branch junction. We assembled actin branches with Saccharomyces cerevisiae Arp2/3 complex or Acanthamoeba complex in the presence of WASp-family NPFs of various sizes that contained the Arp2/3-activating region WA (WASp homology 2 and acidic motifs) (Figure S1). These were N-WASp WA (∼12 kDa), glutathione-S-transferase (GST)-N-WASp WA (∼40 kDa), WAVE1/Scar WA (∼12 kDa), maltose binding protein (MBP)-tagged WAVE1/Scar WA (∼55 kDa), full-length N-WASp (GST-N-WASp bound to its activator GST-Nck, forming a complex of ∼153 kDa), and a non-WASp activator, cortactin. Projection images of the branches were boxed, aligned, and averaged to yield 2D projection maps of the branch junction structure with a resolution of approximately 2.2 nm (Figure 1). The resolution was estimated based on the Fourier ring correlation criterion with a cut-off value of 0.5. Interestingly, no statistically significant differences (at the 99.5% confidence level, p = 0.005) were observed between the density maps of branches assembled with the various WA proteins (12 to 153 kDa) (Figure 1A-1F), whereas a clearly visible, statistically significant difference was observed with GST-cortactin (90 kDa) (Figure 1G-1I). The ability to detect the activator was verified by difference maps using free activated complexes selected from the same electron microscope grids from which the branches were selected (X.-P. X., D. H., and N. V., unpublished data). The additional density attributed to cortactin was located on the obtuse side of the branch next to the main bridge of density on the daughter filament side (Figure 1I). Cortactin enhances the persistence of lamellipodia protrusion during cell motility [8] and probably promotes this effect by stabilizing Arp2/3 branches induced by WAVE2/Scar2 [9]. Thus, the localization of cortactin at the branch junction provides a mechanism for stabilizing either the Arp-mother or the Arp-daughter interaction. We favor a stabilization of the Arp-mother interaction, as this would explain the nucleation-promoting effect of cortactin on the Arp2/3 complex.
However, the relatively weak signal observed with the GST-cortactin construct precludes determination of the molecular nature of cortactin's interactions with the mother or the daughter filament. Our localization positioned the construct density at a site consistent with the idea that cortactin might bind to the Arp3 subunit. The absence of WASp-family NPFs at the branch junction, as revealed by the difference maps, is consistent with the observation that N-WASp/WASp-coated beads undergo motility by cycles of binding, activation, and release of the Arp2/3 complex [10,11]. Localization of Arp2, Arp3, Arc40/ARPC1, and Arc18/ARPC3 Subunits at the Actin Branch Junction To locate Arp2/3 complex subunits in the branch junction by difference mapping, we took a genetic approach to introduce a label into individual subunits of the yeast Arp2/3 complex. Yeast genes encoding the Arp2, Arp3, Arc40/ARPC1, and Arc18/ARPC3 subunits were tagged with the green fluorescent protein (GFP) or yellow fluorescent protein (YFP) coding sequence at their genomic loci through homologous recombination. The C-terminus of each labeled subunit was separated from the label by an eight-amino-acid linker. The advantages of this strategy over the more traditional gold-labeling methods are that it allows highly efficient labeling (∼100%) of each subunit and convenient assessment of the functionality of the labeled complex. All four GFP/YFP-tagged strains grew normally at room temperature (not shown) and 30 °C compared to the unlabeled (control) strain (Figure 2A). The GFP label also contained a (His) 10 tag at the C-terminus, allowing purification of the labeled complex by Ni-NTA affinity (Figure 2B). The unlabeled control Arp2/3 complex was also isolated by Ni-NTA affinity. The nucleation activities of these complexes were tested using the pyrene-actin polymerization assay in the presence of GST-N-WASp WA. The labeled complexes exhibited the same level of nucleation activity as the unlabeled complex (Figure 2C). Actin branches were assembled in the presence of the unlabeled complex or each of the labeled Arp2/3 complexes. Projection maps of the branch junction structures at a resolution of approximately 2.2 nm were generated (Figure 3). Difference maps between branches obtained with the labeled complexes and the unlabeled complex were calculated (Figure 3B and 3C). For cross-validation, each dataset was analyzed independently by two different operators using two different image analysis protocols (Figure S2). All difference maps contain peaks in the branch junction that are statistically significant at a confidence level of 99.5% (p = 0.005) using Student's t-test. The sizes of the peaks are consistent with the presence of an additional protein of the size of a GFP or YFP monomer (∼30 kDa). In the 2D projection of actin branch junctions, the Arp2/3 complex forms three bridges of density between the mother and daughter filaments: a strong bridge of density on the side of the acute angle, a weak bridge of density on the side of the obtuse angle, and a medium bridge of density in the middle (see Figure 1A) [3]. The difference maps between the projection densities obtained from the labeled complexes and the control complex showed that the YFP attached to Arc40/ARPC1 was present on the main bridge close to the mother filament, the GFP attached to Arp3 was on the same side but further away from the mother filament, and the GFP attached to Arc18/ARPC3 was located on the weak bridge close to the mother filament.
Figure 2 legend (partial). (B) Purified yeast Arp2/3 complexes visualized by SDS-PAGE and Coomassie blue staining: unlabeled (control), and GFP- or YFP-labeled Arp3, Arp2, Arc40, or Arc18 complexes. The labeled subunits are marked by arrowheads. The Arc40 subunit in the labeled Arc40/ARPC1-YFP complex ran as 30-kDa and high-molecular-weight species (previously confirmed by immunoblotting and peptide sequencing), owing to an unusual electrophoretic mobility [16]. The Arp3 subunit of the unlabeled complex is denoted by an asterisk, and the Arp3 subunit of the labeled Arc40/ARPC1-YFP complex is denoted by two asterisks. (C) Pyrenyl-actin polymerization kinetics obtained with actin alone (black), control complex (light blue), Arp3-GFP complex (red), Arp2-GFP complex (purple), Arc40/ARPC1-YFP complex (green), and Arc18/ARPC3-GFP complex (dark blue). DOI: 10.1371/journal.pbio.0030383.g002
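The difference-mapping procedure just described lends itself to a compact numerical sketch. The outline below is illustrative only, not the authors' analysis code: the stacks of aligned projections, the image size, and the handling of the significance threshold are placeholder assumptions; only the per-pixel Student's t-test at the 99.5% confidence level (p = 0.005) follows the text.

    import numpy as np
    from scipy import stats

    # Placeholder stacks of aligned 2D branch-junction projections:
    # one for branches assembled with a labeled complex, one for the
    # unlabeled control. Real data would replace these arrays.
    rng = np.random.default_rng(1)
    labeled = rng.normal(size=(500, 64, 64))
    control = rng.normal(size=(500, 64, 64))

    # Difference map between the two class averages.
    diff = labeled.mean(axis=0) - control.mean(axis=0)

    # Per-pixel two-sample Student's t-test; retain only the peaks that
    # are significant at the 99.5% confidence level (p = 0.005).
    t_stat, p_val = stats.ttest_ind(labeled, control, axis=0)
    significant_diff = np.where(p_val < 0.005, diff, 0.0)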
The GFP attached to Arp2 generated two statistically significant peaks in the difference maps of Arp2-GFP: one located on the obtuse angle of the branch, and the other on the acute side. The two peaks correspond to two alternative stable positions of the GFP, because the population of the Arp2-GFP branches can be sorted into two clusters that each show only one peak (Figure S3). Both GFP positions are compatible with the same C-terminus location owing to the length of the flexible linker (Figure 3F). Determination of the Orientation of the Arp2/3 Complex at the Branch Junction by Computational Modeling For all difference maps, the peaks correspond to a projection onto the image plane (XY) of the respective center-of-mass position of the label. Despite the lack of information on the out-of-plane (Z) coordinates, we can use the XY coordinates of the centers of mass as efficient constraints on the three-dimensional (3D) orientation of the complex, because the XY projection of the C-terminus of each labeled subunit must fall within a distance defined by the length of the covalently attached linker and the topology of the label (Figure 3E-3G). In the branch junction, all of the individual positions must be satisfied simultaneously. For example, the C-terminus of Arp2 needs to be in a position that allows attachment of GFP at both positions detected in the difference maps. This restricts the possible XY projection of the Arp2 C-terminus to the small area where the distance to both peaks is below the cut-off distance (i.e., the common area of the two circles in Figure 3F). Addition of constraints for the other subunits further reduces the number of compatible orientations. A global orientation search with the crystal structure of the inactive Arp2/3 complex [12] was carried out to map all orientations that are compatible with the given constraints. The results revealed only a single cluster of orientations that satisfied the label constraints (Figure 4), with an estimated precision of approximately 1 nm. In all permissible orientations, domains I and III of both Arp2 and Arp3, corresponding to the fast-growing (barbed) end in an actin filament, are facing away from the mother filament toward the daughter filament. The relative orientations of Arp2 and Arp3 would need to change from those in the inactive structure in order to provide a suitable template for the growth of the daughter filament. The exact nature of these changes is unknown, but the amplitudes of the changes detected so far [13] are small enough to argue against massive subunit rearrangements such as dissociation of Arp2 and rebinding to Arp3 in a long-pitch filament conformation.
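The acceptance rule of the global orientation search can be sketched in the same spirit. Everything numerical below (C-terminus coordinates, peak positions, the cutoff) is a made-up placeholder; the logic it illustrates is the one stated above: a candidate orientation is kept only if the XY projection of every labeled C-terminus lies within the linker-defined cutoff of its difference-map peak, with the Arp2 C-terminus required to satisfy both of its peaks.

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Placeholder C-terminus positions (nm) in the body frame of the
    # inactive crystal structure, and placeholder label-peak positions
    # (nm) in the image (XY) plane.
    c_termini = {
        "Arp2":  np.array([2.0, 1.0, 0.5]),
        "Arp3":  np.array([-1.5, 2.0, -0.3]),
        "ARPC1": np.array([0.5, -2.5, 1.0]),
        "ARPC3": np.array([-2.0, -1.0, -1.5]),
    }
    peaks = {
        "Arp2":  [np.array([2.5, 1.5]), np.array([-0.5, 2.5])],  # two GFP peaks
        "Arp3":  [np.array([-1.0, 2.5])],
        "ARPC1": [np.array([1.0, -2.0])],
        "ARPC3": [np.array([-2.5, -0.5])],
    }
    CUTOFF = 3.0  # nm; linker length plus label topology (placeholder)

    def satisfies_constraints(rot):
        """Keep an orientation only if each C-terminus projects within
        the cutoff of every peak assigned to it (both peaks for Arp2)."""
        for name, xyz in c_termini.items():
            xy = rot.apply(xyz)[:2]  # project rotated position onto XY
            if not all(np.linalg.norm(xy - pk) <= CUTOFF for pk in peaks[name]):
                return False
        return True

    # Coarse global search over randomly sampled orientations; with real
    # inputs the accepted orientations form a single cluster.
    rots = Rotation.random(20000, random_state=0)
    allowed = [rots[i] for i in range(len(rots)) if satisfies_constraints(rots[i])]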
The preservation of the overall topology of the complex is supported by the fact that all of the constraints obtained in this study for the positions of the labels can be satisfied without the need to introduce changes in the relative orientation of the subunits in the inactive complex. Consistent with a conformational change upon activation, the relative orientation of both Arp2 and Arp3 would need to be altered in our model to provide an exact match of the daughter filament with the direction of its projection density. This conformation could be achieved by an approximately 15° rotation of Arp3 around its short axis and an approximately 15° rotation of Arp2 around an axis parallel to its short axis passing through domain I of Arp3, accompanied by a slight adjustment of the overall complex orientation (<5°). Even though this rearrangement corresponds to a substantial conformational change in the Arp2/3 complex, these rotations would be fully compatible with the constraints imposed by the labels, leading to displacements of the labeled C-termini projections by less than the estimated precision (<1 nm). The resulting daughter filament would not only grow parallel to the XY plane and coincide with the direction of the daughter filament in the projection maps but also fit the shape of the projection density remarkably well (Figure 4E and 4F). Domains II and IV of Arp3 (orange in Figure 4) are well positioned to make direct contact with the mother filament. The other Arp2/3 subunits are also in a position to contact the mother filament, with Arp2 (pink) being the furthest away from the mother filament (Figure 4A-4D). The fact that Arp3 is close to the mother filament was not apparent from the projection images alone [3] and could not be inferred from other available data. The data from subunit labeling, in conjunction with the crystal structure of the isolated complex, allowed a much more detailed and accurate assignment of the densities than previously possible and indicate that the previous assignment was one unit off (i.e., the previous Arp3 position corresponds to Arp2, and Arp2 to the first actin monomer in the daughter filament).

Conclusions

The data presented here support the model that Arp2 and Arp3 adopt an actin short-pitch dimer-like configuration that templates the initiation of the daughter filament in the barbed end direction. The data are incompatible with the proposed incorporation of Arp2 and Arp3 into two different actin filaments at the branch junction [7]. The two available hypothetical structural models of the branch junction [5,6] (illustrated in Figure 4G and 4H) relied on the assumption that the barbed ends of Arp2 and Arp3 face the daughter filament to orient the complex within the branch junction. In contrast, the labeling-based model presented here did not use this assumption as a constraint, and therefore our results lend unbiased evidence to the proposed mechanism where Arp2 and Arp3 serve as a template dimer for the barbed-end-directed growth of the daughter filament. Additionally, our localization data are incompatible with the positions of the C-termini of the subunits proposed in both of these previous models (compare Figure 4A with 4G and 4H). These models suggested that the longest axis of the complex, comprising ARPC1, -5, -4, and -2 (Arc40, -15, -19, and -35 in the yeast Arp2/3 complex), contacts the side of the mother filament (Figure 4G and 4H).
Our model deviates from these models by an anticlockwise rotation of approximately 100° around the axis of the daughter filament, resulting in an alignment of the longest axis almost perpendicular rather than parallel to the mother filament (Figure 4A). This geometry could allow comprehensive interactions between the axis formed by ARPC2/4 (with a possible contribution from ARPC5) and a groove of the mother filament, with Arp3 and ARPC3 on one side and ARPC1 on the other side to provide stabilizing interactions that would prevent the complex from rocking horizontally as well as vertically against the mother filament. In summary, our model provides the structural basis for the mechanical stability of the branch junction that is important for effective force generation upon filament elongation at the barbed ends. It is fully consistent with the available biochemical data and the growth direction of the daughter filament and directly supports the template-dimer model of Arp2/3-mediated actin nucleation. The subunit map established in this analysis thus provides a new structural framework for further understanding the spatial and temporal control of branch nucleation and turnover in the generation of an advancing dendritic network that drives protrusive cellular movement.

Materials and Methods

Plasmids, genetic manipulations, and yeast strains. Yeast strains expressing C-terminal GFP- or YFP-labeled Arp2/3 complex subunits were generated by homologous recombination by the integration of a cassette containing a linker (GDGAGLIN), the yEGFP (or yECitrine) coding sequence, and a polyhistidine (His)10 tag at the 3′ end of each open reading frame. The cassette was generated using pCE36, a derivative of pKT128 [14]. Strains used in this study are listed in Table 1.

Proteins. Actin was purified from rabbit muscle and isolated as Ca2+-ATP-G-actin in G buffer (5 mM Tris-Cl [pH 7.8], 0.1 mM CaCl2, 0.2 mM ATP, and 1 mM DTT) according to Pardee and Spudich [15] and pyrenyl labeled. The yeast Arc40/ARPC1-YFP Arp2/3 complex was isolated from a strain expressing an Arp3-CaMBM-tev-ProtA subunit (RLY1945) as previously described [16]. The unlabeled control complex (Arp3MH, which has a (Myc)5-His6 tag on Arp3 [16]) as well as the Arp3-, Arp2-, and Arc18/ARPC3-GFP-His10-labeled complexes were isolated as follows. Yeast cells were grown to mid-log phase in YPD medium (OD600 2-4), washed in U buffer (50 mM HEPES [pH 7.5], 100 mM KCl, 3 mM MgCl2, and 1 mM EGTA), and stored at −80 °C until use. A 50- to 100-g cell pellet was resuspended in five volumes of cold U buffer supplemented with 0.5% Triton X-100, 0.2 mM ATP, 1 mM DTT, and protease inhibitor mix (0.5 µg/ml antipain, leupeptin, pepstatin A, chymostatin, and aprotinin, and 1 mM PMSF) and passed through a microfluidizer (Microfluidics, Newton, Massachusetts, United States) until 70% lysis was obtained. The cell extracts were cleared by centrifugation at 100,000 × g for 1 h and filtered through a 0.45-µm filter. A 60% ammonium sulfate precipitation of cell extracts was performed, and the pellet was dialyzed into NaP buffer (100 mM phosphate [pH 7.8], 100 mM KCl, and 20 mM imidazole). This fraction was cleared by centrifugation and incubated with Ni-NTA agarose beads (Qiagen, Valencia, California, United States). Beads were washed with NaP buffer, NaP buffer plus 0.5 M KCl, and NaP buffer plus 0.5% Triton X-100, and the complex was eluted with 250 mM imidazole.
The complex was further purified through a HiTrap S column (Amersham Biosciences, Little Chalfont, United Kingdom) in 50 mM MES (pH 6.5), 25 mM NaCl; a UnoQ1 column (Bio-Rad, Hercules, California, United States) in U buffer; and a Superose 12 gel filtration column (Amersham Biosciences) in U buffer on a BioLogic chromatography system (Bio-Rad). Acanthamoeba Arp2/3 complex was purified by poly-L-proline [18] and gel filtration chromatography as described [19]. Purified complexes were immediately used to assemble actin branches or stored in U buffer supplemented with 0.2 M sucrose, flash frozen in liquid nitrogen, and stored at −80 °C. Bovine GST-N-WASp WA, bovine GST-N-WASp, murine GST-cortactin, and murine GST-Nck were purified as previously described [20,21]. MBP-Scar1 WA was generated by fusing Scar1 S495-C559 to MBP followed by a C-terminal His6 tag.

[Figure 4 legend, continued: models by [6] (G) and by Aguda et al. [5] (H) shown for comparison. Note that in (G), the daughter filament will be oriented out of the paper plane toward the reader. (I) Arp2/3 crystal structure in the same orientation as originally presented in Robinson et al. [12]. DOI: 10.1371/journal.pbio.0030383.g004]

Electron microscopy. Freshly purified Arp2/3 complexes were used to assemble actin filament branches, which were applied to glow-discharged EM carbon-coated grids and stained with 2% uranyl acetate. Images were recorded under low-dose conditions at a magnification of 42,000 and at a defocus of 1.8 µm using a Tecnai 12 G2 microscope (FEI, Hillsboro, Oregon, United States) equipped with a LaB6 filament at 120 kV and a 1,024 × 1,024 MSC 600HP camera (model 794; Gatan, Pleasanton, California, United States). The pixel size was 0.57 nm. Branches were selected and boxed using EMAN [22]. Image analysis was performed independently by two different experimentalists (I. R. and X.-P. X.), using two different image analysis packages: Spider [23] and EMAN [22]. Results were compared only at the end of the analysis.

Image processing and cross-validation. For alignment using Spider, selected branches were aligned with a reference-based alignment procedure using standard alignment protocols implemented in Spider [23]. The initial reference was a well-stained branch chosen from the dataset. After alignment, branches were inspected visually, outliers (branches that obviously were not aligned) were discarded, and aligned branches were averaged. This new average was used for another round of alignment. This process was repeated until no more changes were observed (typically three or four times). For several datasets (Arp2-GFP, full-length N-WASp, and cortactin), three different initial references (two different branches and the average obtained with the control) were used. Comparison of the different final averages for individual datasets showed that they were practically identical (and the difference map between the average and the control maps showed the same difference peaks), i.e., the final average was not biased by the choice of the initial reference. The other datasets were aligned to one or two initial references and the results were cross-validated with the results from EMAN (see below). For alignment using EMAN, for the dataset of the unlabeled yeast Arp2/3 branches in the presence of N-WASp WA and amoeba Arp2/3 complex in the presence of Scar WA, initial references with good quality (straight and with high contrast) were picked from the respective dataset.
Projection maps were generated using the correlation-based iterative alignment algorithm and outlier screening implemented in EMAN [22]. To further reduce model bias, the procedure was repeated for nine different references each. The final projection maps were generated by aligning and averaging the respective nine maps. For all other datasets, the final projection map of either the amoeba complex in the presence of Scar WA (for amoeba-based samples) or the yeast complex in the presence of N-WASp WA (for yeast-based samples) was used as the initial reference.

Averaging and significance testing. The aligned images selected for averaging (separately for the Spider and EMAN sets) were normalized and averaged using routines from CoAn [24]. CoAn was also used to calculate the difference maps and the standardized variance maps that are suitable for input to Student's t-test procedures [25]. All tests were performed at a confidence level of 99.5%. All peaks presented were statistically significant and virtually at the same location in the two independent image analyses.

Fitting of constraints and precision estimate. In order to computationally fit the constraints obtained by labeling, we adapted routines from the CoAn package [24] that were previously used in the context of density fitting and subsequent evaluation of 3D real-space constraints derived from mutagenesis and biochemistry experiments [26]. The routines, which perform a global scan of the orientations, were modified to handle 2D constraints. After applying a rotation to the crystal structure of the inactive Arp2/3 complex, the positions of the four C-termini were projected onto the XY plane. Then, a translational least-squares fit between the projected C-termini and the respective constraints (in-plane positions of the labels, one constraint each for Arc40/ARPC1, Arc18/ARPC3, and Arp3 and two for Arp2) was performed. Next, the distances between the C-termini projections and the respective constraints were tested using a predetermined cut-off value. If the distance was below this value, the orientation was kept for further processing. A complete global scan using a 10° increment with this configuration completes within 3 min on an Athlon Opteron dual-processor box. An advantage of a global scan versus the more traditional least-squares fitting is that all solutions that satisfy the constraints are mapped and can be used for solution set analysis similar to that used for density-based docking [27]. To determine an estimate for the uncertainty of the orientation in three dimensions, we used the following procedure. The length of the linker and the 3D structures of GFP and YFP determine that the (projected) distance between the respective C-terminus and the difference peak (assumed to represent the center of the GFP/YFP) can be anywhere between 0 and 6 nm. A priori, we do not know which value to choose, but we can use the following argument to find the most appropriate cut-off. The set of 3D orientations that satisfy the constraints at a certain cut-off value can be used to calculate a central orientation (centroid) that minimizes the average root-mean-square deviation to all other members of the solution set. If the cut-off value is too small, the constraints are too tight and the centroid will be biased toward the tightest constraint. If the cut-off value is too large, the centroid will not change, but the solution set will be too large and give unrealistically large estimates of precision.
Thus, the most appropriate cut-off distance is the smallest value that still gives the same centroid orientation as larger values. The solution set from this value can be used to get an estimate of the precision for the orientation determination by calculating the average root-mean-square deviation in the set. Using this procedure with test cut-off values between 1 and 6 nm, we found that the most appropriate cut-off value is 3.9 nm. The centroid orientation (which is the one presented in Figure 4) has an average in-plane distance between the C-termini and the respective peaks of 2.43 nm (Arp3: 2.59 nm; Arp2: 2.98 and 3.48 nm; Arc40/ARPC1: 1.30 nm; Arc18/ARPC3: 1.78 nm; see also Figure 3). The precision of the 3D orientation was estimated from the solution set as 0.99 nm.

Molecular graphics. In Figure 4, the low-resolution representations of the Arp2/3 complex were generated from the crystal structure [12]. Coordinates for domains I and II of Arp2 are not available owing to disorder in the crystal structure. We substituted these two domains by subdomains 1 and 2 of an actin monomer [28] after least-squares fitting of subdomains 3 and 4 of actin to domains III and IV of Arp2. Representation of atomic models and densities was done using Pymol (http://www.pymol.org).

[Figure legend fragment, cf. Figure 3C and 3F: For each peak, the density within the peak area was measured for every aligned branch image. This resulted in distinct bimodal distributions with one peak at high values (peak present) and another at low values (peak absent). The bimodal character of the distribution indicates that we indeed have a systematic difference; otherwise, a single Gaussian distribution would occur. The averages were then calculated from the subpopulation with high values only.]
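As an illustration of the global scan and translational fit described above, the following minimal sketch enumerates rotations, projects the C-termini, translation-fits them to the peaks, and keeps the rotations satisfying every constraint. It uses illustrative random coordinates, not the crystallographic C-termini or measured peak positions; only the 10° increment and the 3.9 nm cut-off echo the text, and for brevity one peak is paired with each terminus, whereas the paper imposes two peaks on the single Arp2 C-terminus.

import itertools
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
ctermini = rng.normal(size=(4, 3)) * 3.0   # hypothetical 3D C-termini (nm)
peaks = rng.normal(size=(4, 2)) * 3.0      # hypothetical 2D difference peaks (nm)
cutoff = 3.9                               # nm

kept = []
full = np.arange(0, 360, 10)               # 10-degree scan increment
half = np.arange(0, 180, 10)
for a, b, c in itertools.product(full, half, full):
    rot = Rotation.from_euler("zyz", [a, b, c], degrees=True)
    xy = rot.apply(ctermini)[:, :2]        # project rotated C-termini onto XY
    xy = xy + (peaks - xy).mean(axis=0)    # translational least-squares fit
    if np.all(np.linalg.norm(xy - peaks, axis=1) <= cutoff):
        kept.append((a, b, c))

print(f"{len(kept)} of {36 * 18 * 36} orientations satisfy all constraints")

The advantage named in the text carries over directly: the loop maps every compatible orientation, so the kept set can be analyzed as a solution cluster rather than yielding a single best fit.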
Probabilistic assessment of load-bearing capacity of deep beams designed by strut-and-tie method

This paper presents a probabilistic assessment of the load-bearing capacity and reliability of deep beams designed with different strut-and-tie models (STM). Six deep beams having different reinforcement arrangements, obtained on the basis of STM but with the same overall geometry and loading pattern, were analysed. The strut-and-tie models used for the D-regions of the analysed elements have been verified and optimised by different researchers. In order to assess the load-bearing capacity of these elements probabilistically, stochastic modelling was performed. In the presented probabilistic analysis of the designed deep beams, the ATENA software, the SARA software, and the CAST (computer-aided strut-and-tie) design tool were used. The reliability analysis showed that STM optimization should be a multi-criteria issue, so that the obtained models are characterized by optimal stiffness at the assumed volume or weight and by maximum reliability.

Introduction

The use of ST models for the design of reinforced concrete structures has a very long history and is practically inseparable from the history of reinforced concrete structures. The strut-and-tie model method is especially used in the design of D-regions, where the Bernoulli hypothesis does not apply. An STM idealizes the complex force flow in a structure as a collection of compression members (struts), tension members (ties), and the intersections of such members (nodes). Although strut-and-tie modelling techniques have been extensively investigated since a comprehensive work was reported by Schlaich et al. [1], the standard recommendations and the literature do not provide rules allowing the shape and direction of elements in the ST method to be determined unambiguously. Many different types of techniques and algorithms have been proposed by dozens of researchers, and the selection of the optimal model is the subject of many scientific works published in recent years. In these works, different criteria for the optimization of ST models are used, usually omitting the reliability assessment of the obtained model. The design of safe structures should be the overriding objective, since the reliability of the structure is closely related to the ways of treating uncertainty and making decisions in the initial design phase. In this paper, six deep beams having different reinforcement arrangements obtained on the basis of STM but the same overall geometry and loading pattern were analysed. In order to assess the load-bearing capacity of these elements probabilistically, stochastic modelling was performed, with randomization of variables during Monte Carlo simulation. The Latin Hypercube Sampling (LHS) method was selected in order to reduce the number of simulations to an acceptable level.

ST models of deep beams

The analysed deep beam with a rectangular opening is shown in Fig. 1, which gives the geometry and dimensions of the element. This element was utilized by Novak and Sprenger [2] as a strong example for the application of strut-and-tie modelling of reinforced concrete structures. The deep beam as a whole is considered a D-region due to geometric and force discontinuity. In the analysed deep beam, the reinforcement was formed on the basis of six ST models. The strut-and-tie model T1, shown in Fig. 2, was proposed by Novak and Sprenger [2]. It behaves as a so-called beam-on-beam, that is, an upper span is supported by a lower span.
After the Novak model, Reineck [3] proposed and investigated several different strut-and-tie models. One of the models analysed by Reineck was obtained through a frame analysis in which the upper beam is symmetrically supported by the lower beam (Fig. 3). This is the second model analysed in this paper, marked as T2 [3]. In the following years, this classical example of a D-region attracted widespread attention from other researchers. Ley et al. [4] developed some other STMs and conducted a series of experiments to verify the application of strut-and-tie modelling. Two of the STMs analysed by Ley et al. are shown in Figs. 4 and 5: the third STM of the analysed deep beam, marked as T3, and the fourth STM, built using the load path and marked as T4. The next STM of the deep beam, T5, was proposed by Herenza et al. [5]. In this model, the Full Homogenization (FH) optimization method was used to determine the shape of the strut-and-tie model (Fig. 6). The last of the analysed STMs of the deep beam, T6, was the model proposed by Zhong et al. [6]. The Ground Structure Method (GSM) was used to generate this strut-and-tie model, shown in Fig. 7. The reinforcement in the analysed deep beams was designed with the computer-aided strut-and-tie (CAST) design tool [7]. The CAST is a graphical design tool that allows the user to customize D-regions, draw an internal truss, check the nodes, and select the width of strut members and tie reinforcement. In the next steps, numerical simulations of the six analysed deep beams were performed by means of the nonlinear mechanics software ATENA [13]. The numerical model was considered in a two-dimensional stress state. To solve the static problems of the reinforced concrete deep beams, a calculation procedure based on the Newton-Raphson iterative method was applied. The Newton-Raphson method keeps the load increment unchanged and iterates displacements until equilibrium is satisfied within the given tolerance. To model the concrete, the material model SBETA provided by ATENA (Advanced Tool for Engineering Nonlinear Analysis) was used. The material model SBETA includes the following effects of concrete behaviour:
− non-linear behaviour in compression including hardening and softening;
− fracture of concrete in tension based on nonlinear fracture mechanics;
− biaxial strength failure criterion;
− reduction of compressive strength after cracking;
− tension stiffening effect;
− reduction of the shear stiffness after cracking;
− two crack models: fixed crack direction and rotated crack direction.
The basic constitutive characteristics of the concrete are shown in Table 1. For modelling the main reinforcement, the material model "reinforcement" provided by ATENA was used: an elastic-plastic material model with characteristics corresponding to steel RB500W. The numerical model of deep beam T4 is shown in Fig. 8. Fig. 9 shows the dependence between load and displacement of the point situated at the centre of the span, obtained for all tested deep beams. To validate the numerical model, experimental results for deep beam T4, made at a scale of 1:10.5 and presented in the literature [5], are used. The failure load was 55.2 kN in the experimental research and 59.9 kN in the numerical simulations. The observed differences are caused by incomplete information about the materials used in the experimental studies and by differences in the geometry of the numerical and experimental deep beams. The numerical models were made at a scale of 1:10.

Stochastic modelling

The behaviour of the deep beams under load was analysed in detail by stochastic modelling.
The objective was to find out the impact of the type of STM and of selected input data on the bearing capacity of the deep beam. For time-intensive calculations, small-sample simulation techniques based on stratified sampling of the Monte Carlo type represent a rational compromise between feasibility and accuracy. Therefore, in the presented simulations, Latin hypercube sampling (LHS) was selected as the key fundamental technique. The basic feature of LHS is that the probability distribution functions of all random variables are divided into NSim equivalent intervals (NSim is the number of simulations); the values from the intervals are then used in the simulation process (random selection, middle of interval, or mean value). This means that the range of the probability distribution function of each random variable is divided into intervals of equal probability. The samples are chosen directly from the distribution function based on an inverse transformation of the distribution function. The representative parameters of the variables are selected randomly, based on random permutations of the integers 1, 2, ..., j, ..., NSim. Every interval of each variable must be used only once during the simulation (Fig. 10). Based on this precondition, a table of random permutations can be used conveniently; each row of such a table belongs to a specific simulation and each column corresponds to one of the input random variables [8]. The representative value of variable Xi used in the j-th simulation is taken as the mean value over the j-th interval,

x_{i,j} = NSim · ∫_{y_{j-1}}^{y_j} x f_i(x) dx, (1)

where f_i is the probability density function of variable Xi and the integration limits are (2):

y_{j-1} = F_i^{-1}((j − 1)/NSim), y_j = F_i^{-1}(j/NSim), (2)

with F_i^{-1} the inverse cumulative distribution function of Xi. The estimated mean value is achieved accurately, and the variance of the sample set is much closer to the target one [14]. A minimal code sketch of this sampling scheme is given below. In the analysis, the random character of the input data (concrete and steel) was assumed. For each of the deep beams, 50 simulations were performed with modified statistical parameters. The statistical parameters were described using the recommendations specified in JCSS [9], ISO [10], and [11]. The input values should be properly described, e.g., with a mean value, coefficient of variation, or type of distribution. The distribution and coefficient of variation (COV) for the input variables of concrete and steel are shown in Table 3. Fig. 11 shows an exemplary histogram for 50 randomized parameters generated using the LHS technique. In the analysis, the correlation between the parameters of concrete was taken into account. Table 4 shows the correlation matrix used for the concrete in the stochastic modelling. Expected values of the correlation matrix are shown in the right part of the matrix; values obtained by simulated annealing for one of the deep beams are shown in the left part. The stochastic modelling was carried out using the SARA [12] software application. Example results of the analysis for one of the six deep beams are shown in Fig. 12, where the load-displacement dependences obtained for all tasks generated for deep beam T2 are plotted. The mean value of the ultimate load P, the confidence level, standard deviation, COV, coefficient of skewness, kurtosis, the upper value Psup, the lower value Pinf, and the reliability index βc for the load-bearing capacity are compared in Tables 5 and 6. Here, Pinf is the 5% fractile and Psup is the 95% fractile of the statistical distribution of P. An important task in structural reliability analysis is to determine the significance of the random variables, i.e., how they influence the response function of a specific problem. A sensitivity analysis can answer the question "which variables are the most important?".
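The sampling scheme promised above can be sketched in a few lines. The distributions below are illustrative, not the JCSS-based parameters of Table 3, and interval midpoints are used for simplicity, whereas Eq. (1) takes the mean value over each interval.

import numpy as np
from scipy import stats

def lhs_sample(dists, n_sim, seed=0):
    rng = np.random.default_rng(seed)
    # midpoint probability of each of the n_sim equiprobable intervals
    probs = (np.arange(n_sim) + 0.5) / n_sim
    samples = np.empty((n_sim, len(dists)))
    for i, dist in enumerate(dists):
        # a fresh random permutation per variable: every interval of every
        # variable is used exactly once (the permutation table of Ref. [8])
        samples[:, i] = dist.ppf(rng.permutation(probs))
    return samples

# hypothetical inputs: concrete strength (lognormal), steel yield (normal)
dists = [stats.lognorm(s=0.15, scale=38.0), stats.norm(loc=560.0, scale=30.0)]
x = lhs_sample(dists, n_sim=50)
print(x.mean(axis=0))   # sample means land close to the target means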
In the probabilistic assessment of the deep beams, a sensitivity analysis based on the comparison of the partial coefficient of variation of the structural response variable with the variation coefficients of the basic random variables was performed. The rank-order statistical correlation is expressed by the Spearman correlation coefficient; nonparametric rank-order correlation coefficients are calculated between all random input variables and the response variables by formula (3):

r_s = 1 − 6 Σ d_i² / (n (n² − 1)), (3)

where d_i is the difference of the order of the components in the sequenced statistical files and n is the range of the statistical file. The results of the sensitivity analysis between the variables and the ultimate load for the deep beams are compared in Table 7. In most cases, the concrete properties had the biggest impact on the structural response. A high positive correlation coefficient, more than 0.9, indicates that the response or limit state function is very sensitive to that particular variable. For the compression strength of concrete, in the case of deep beam T4, the correlation coefficient is smaller than in the other analysed deep beams. After the statistical analyses, the reliability analyses were carried out. The limit state function Z (margin of safety) was formulated as the difference between the resistance R and the load effect E:

Z = R − E. (4)

According to the original assumptions, the load effect applied to the considered deep beams was 25 kN. For this load effect, a probabilistic description by means of a normal distribution with COV 0.15 is used. In this case, reliability analysis methods employing Cornell's reliability index βc and the corresponding failure probability (Cornell pf) were carried out. Estimation of Cornell's reliability index requires the estimation of the basic statistical characteristics of the safety margin. The Cornell reliability index is expressed by formula (5):

βc = μZ / σZ, (5)

where μZ and σZ are the mean value and the standard deviation of the safety margin Z. The estimated reliability indexes and the adopted efficiency indexes for the six deep beams are compared in Table 8. The efficiency indexes were defined as the ratio of the load-bearing capacity to the reinforcement mass and the ratio of the reliability index to the reinforcement mass. The largest value of the reliability index and the highest load-bearing capacity were obtained for deep beams T2 and T5. On the other hand, the T2 deep beam is the least economical, as it has the largest reinforcement mass among all the analysed deep beams. Analysing the results presented in Table 8, it can be seen that in the case of deep beam T4, with reinforcement obtained on the basis of the load path, the reliability index does not meet the requirements for the RC2 class and T = 50 years defined in PN-EN-1990 (β ≥ 3.8). The largest values of the efficiency indexes, i.e., the ratio of the load-bearing capacity to the reinforcement mass and the ratio of the reliability index to the reinforcement mass, were obtained for the deep beam T6.
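For concreteness, Eqs. (3)-(5) can be evaluated with a short script. The samples below are synthetic placeholders, not the simulation results behind Tables 7 and 8; only the 25 kN load effect with COV 0.15 echoes the text.

import numpy as np

def spearman(x, y):
    # rank the two statistical files and apply Eq. (3)
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1
    ry = np.argsort(np.argsort(y)) + 1
    d = rx - ry
    return 1.0 - 6.0 * np.sum(d**2) / (n * (n**2 - 1))

def cornell_beta(resistance, load_effect):
    # safety margin Z = R - E (Eq. (4)); beta_c = mu_Z / sigma_Z (Eq. (5))
    z = resistance - load_effect
    return z.mean() / z.std(ddof=1)

rng = np.random.default_rng(1)
fc = rng.normal(38.0, 5.0, 50)               # hypothetical concrete strengths
p_ult = 1.5 * fc + rng.normal(0.0, 2.0, 50)  # hypothetical ultimate loads
print(f"Spearman r = {spearman(fc, p_ult):.2f}")

r = rng.normal(60.0, 6.0, 100_000)           # resistance R, kN
e = rng.normal(25.0, 0.15 * 25.0, 100_000)   # load effect E: 25 kN, COV 0.15
print(f"beta_c = {cornell_beta(r, e):.2f}")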
Conclusion

This paper presents a probabilistic assessment of the load-bearing capacity and reliability for different STM of deep beams. In the presented probabilistic analysis of deep beams designed by the strut-and-tie method, the ATENA software, the SARA software, and the CAST (computer-aided strut-and-tie) design tool were used. Summing up the results of the analysis, the following detailed conclusions can be formulated:
• structures designed on the basis of different ST models are characterized by different load-bearing capacities and different masses of reinforcement. The mean value of the load-bearing capacity ranged from 55.3 kN for deep beam T1 to 64.6 kN for deep beam T2. The observed differences in the maximum load value are around 15%, and in each analysed case the load capacity requirement was met. Significant differences (around 80%) were observed in the mass of reinforcement required to design the deep beams;
• the results of the stochastic simulations show that the coefficients of variation are similar for most tested deep beams, at about 10%. Only for deep beam T4 is a higher coefficient of variation of 14% obtained;
• the strut-and-tie models are also characterized by different reliability structures. Similar values of the reliability index for resistance were obtained for deep beams T1, T3, and T6, at about 10, but for deep beam T4 the reliability index is only 7.25;
• the reliability analysis shows that in the case of deep beam T4, with reinforcement obtained on the basis of the load path, the reliability index does not meet the requirements for the RC2 class and T = 50 years. In the other analysed cases, the Cornell reliability index ranges from 4.63 to 5.14, and the reliability requirements defined in PN-EN-1990 are met;
• the largest values of the efficiency indexes, i.e., the ratio of the load-bearing capacity to the reinforcement mass and the ratio of the reliability index to the reinforcement mass, were obtained for deep beam T6, with the STM generated on the basis of the Ground Structure Method.
In conclusion, as ensuring the safety of a structure should be the primary goal, STM optimization should be a multi-criteria issue, so that the obtained models are characterized by optimal stiffness at the assumed volume or weight and by maximum reliability.
Giant Superlinear Power Dependence of Photocurrent Based on Layered Ta2NiS5 Photodetector

Abstract

Photodetectors based on two-dimensional (2D) materials are an ongoing quest in optoelectronics. 2D photodetectors are generally efficient at low illuminating power but suffer severe recombination processes at high power, which results in a sublinear power-dependent photoresponse and lower optoelectronic efficiency. The desirable superlinear photocurrent is mostly achieved by sophisticated 2D heterostructures or device arrays, while 2D materials rarely show intrinsic superlinear photoresponse. This work reports the giant superlinear power dependence of photocurrent based on multilayer Ta2NiS5. While the fabricated photodetector exhibits good sensitivity (3.1 mS W−1 per □) and fast photoresponse (31 µs), the bias-, polarization-, and spatial-resolved measurements point to an intrinsic photoconductive mechanism. By increasing the incident power density from 1.5 to 200 µW µm−2, the photocurrent power dependence varies from sublinear to superlinear. At higher illuminating conditions, prominent superlinearity is observed with a giant power exponent of γ = 1.5. The unusual photoresponse can be explained by a two-recombination-center model in which the density of states of the recombination centers (RC) effectively closes all recombination channels. The photodetector is integrated into a camera for taking photos with enhanced contrast due to the superlinearity. This work provides an effective route to enable higher optoelectronic efficiency at extreme conditions.

Introduction

Optoelectronic devices based on two-dimensional (2D) materials have attracted intense research attention owing to their excellent performance: high sensitivity, [1,2] fast response time, [3,4] and high electron mobility. [5,6] The photoconductive detector is one of the most stable optoelectronic devices, with broad working bandwidth, [7] high responsivity, [8] and high gain. [9] The photoresponse of this device is mainly determined by material properties, owing to the simple structure and physical mechanism. When a semiconductor material absorbs incident photons whose energy is equal to or greater than the bandgap, photon-generated electrons and holes are separated in opposite directions and collected by the electrodes under an external bias. The photocurrent (I_ph) increases as a function of incident power (P) following a power-law dependence of I_ph ∝ P^γ. The power exponent (γ) varies between different materials because of electron-hole generation, trapping, recombination processes, and other mechanisms. [10-13] For the ideal case, a linear increase of photocurrent with incident power is expected (γ = 1), since the photocurrent is solely determined by the photogeneration of electron-hole pairs. [14-19] In most 2D-based devices, I_ph exhibits a sublinear power dependence under high-intensity illumination due to the dominating contribution from defects and impurities. As the light intensity increases, those defects serve as effective recombination centers (RC) and capture more photocarriers, which leads to the saturation of the photocurrent (γ < 1) [15,20-22] and decreased responsivity. As for superlinear power dependence (γ > 1), it is found in comparatively rare cases and features increased photoresponsivity with power. [23,24]
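For reference, the exponent γ in I_ph ∝ P^γ is conventionally extracted as the slope of a linear fit in log-log space. A minimal sketch with synthetic data (not measured values) is:

import numpy as np

def power_exponent(power, current):
    # log I = gamma * log P + const, so the slope of the fit is gamma
    slope, _ = np.polyfit(np.log(power), np.log(current), 1)
    return slope

rng = np.random.default_rng(0)
p = np.logspace(0, 2, 20)                       # arbitrary power units
i_sub = p**0.53 * rng.lognormal(0.0, 0.03, 20)  # sublinear case, gamma ~ 0.53
i_sup = p**1.50 * rng.lognormal(0.0, 0.03, 20)  # superlinear case, gamma ~ 1.5
print(f"gamma (sublinear):   {power_exponent(p, i_sub):.2f}")
print(f"gamma (superlinear): {power_exponent(p, i_sup):.2f}")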
Recently, superlinear power-dependent photocurrent was reported in a series of artificial 2D structures such as graphene/h-BN, [25] graphene/WSe2, [26] and WS2/MoS2 [27] heterojunctions, and sheet arrays. [28] The typical origin of superlinearity in heterostructure devices is the photothermionic effect, where hot carriers are injected from the gate side to overcome the Schottky barrier exponentially as the external injection bias increases, resulting in significantly extended spectral bandwidth and responsivity. [25,26,29] Meanwhile, the multicenter Shockley-Read-Hall process [28,30] also contributes to the superlinear response in arrayed structures such as printed MoS2 and GaTe transistor arrays, because the array structure keeps photocarriers from massive recombination at high luminous power. [30] The desired superlinear photoresponse is thus mainly achieved by sophisticated 2D heterostructures and arrays. [25,29] However, as the building blocks of those 2D artificial structures, the 2D materials themselves rarely show intrinsic superlinear photoresponse. Even within the existing cases, the superlinearity is weak, with a power-law exponent generally lower than 1.1. Hereafter, we define a "homogeneous 2D material" [31] as a single 2D material, in contrast to the heterostructures and arrays. A homogeneous 2D material with stronger intrinsic superlinearity (higher γ) is desired, as it potentially allows for stronger optoelectronic efficiency in the high-power regime and enables better performance if integrated into the discussed sophisticated structures. In this work, we report a prominent and intrinsic superlinear power dependence of photocurrent based on homogeneous Ta2NiS5 at ambient conditions. The photodetector has a simple metal-Ta2NiS5-metal structure. Bias-dependent and spatial scanning photocurrent measurements suggest the photoconductive origin of the photoresponse, so that the photocurrent is determined by the intrinsic material properties of Ta2NiS5. The photoconductive devices feature a fast response of 31 µs, along with good sensitivity of 3.1 mS W−1 per □ and polarization-sensitive anisotropy. In the low-intensity regime (1.5-15 µW µm−2), the photocurrent shows the conventional sublinear power dependence. Upon increasing the power density (15-200 µW µm−2), the photocurrent becomes weakly superlinear. With illuminating power density higher than 200 µW µm−2, strong superlinear power dependence is found, with a giant power exponent γ = 1.5 for a homogeneous 2D material. Different from a previous report [32] where the capture cross-section plays the major role in determining weak superlinearity, here the unusually strong superlinearity requires the presence of RCs with distinct densities of states. We present a two-RC model that captures the main finding of the experiments, which is further quantitatively proved by multiparameter fitting. The fabricated Ta2NiS5 device is tested for taking photographs; the image contrast is clearly enhanced due to the superlinearity of the device. Our work sheds light on the superlinear photocurrent, which allows enhanced optoelectronic performance of photoconductive devices at high illuminating power.

Results and Discussion

Ta2NiS5 crystallizes in the orthorhombic system (space group Cmcm, D2h^17), as shown in Figure 1a, and is composed of layers stacked along the b-axis. Each layer consists of periodically arranged [TaS6]2 chains and NiS4 chains.
The armchair structure runs along the a-axis, leading to the quasi-one-dimensional structure [33,34] along with the resultant anisotropic electronic and optical characteristics. [35,36] High-quality Ta2NiS5 crystals are prepared by the chemical vapor transport method (Figure 1b) with a temperature gradient of 6 °C cm−1. The needle-like crystals (Figure 1c) are found at the cold end with shiny surfaces. More details can be found in the Experimental Section. As shown in Figure 1d, a copper-target X-ray diffraction (XRD) pattern of the as-grown Ta2NiS5 crystal is recorded to evaluate the crystal structure and orientation. The prominent peaks at 14.6°, 29.4°, and 44.8° originate from the (010) plane. The extracted lattice constant b is 12.11 Å. The inset presents the full width at half-maximum (FWHM) of 0.16°. The lattice properties and anisotropic characteristics can be further examined by Raman microscopy. The randomly polarized Raman spectrum is shown in Figure 1e, measured under ambient conditions with a HeNe laser. Apparent peaks at 127.0 and 148.6 cm−1 correspond to the 2Ag and 3Ag phonon modes, respectively. [37] Angle-resolved polarized Raman spectra are carried out in both parallel and perpendicular polarization configurations. Figure 1f,g presents the false-color maps of the Raman spectra. The original spectra are provided in Section SII (Supporting Information). The experimental coordinates x, y, z coincide with the crystal directions a, b, c, respectively. The excitation beam propagates in the y direction, and the polarization is controlled by a half-wave plate. More details are provided in the Experimental Section and Section SII (Supporting Information). The Raman tensor of the Ag modes in Ta2NiS5 is given by [38]

R(Ag) = diag(|a| e^{iφa}, |b| e^{iφb}, |c| e^{iφc}). (1)

The anisotropic Raman response in the parallel configuration can be quantitatively derived as

I_∥ ∝ |c|² {(sin²θ + (|a|/|c|) cos φca cos²θ)² + ((|a|/|c|) sin φca cos²θ)²}, (2)

where a, b, and c are the amplitudes of the Raman tensor elements, φa, φb, and φc are the phases of the elements, φca = φc − φa, and θ denotes the angle between the polarization vector of the incident light e_I and the a-axis of the crystal. [39] The angle-dependent phonon intensity can be well fitted by the Raman tensor, as shown in Figure 1h-k. In the parallel configuration (e_i ∥ e_s), I_∥(Ag) reaches its global maximum along the armchair direction and a local maximum along the zigzag direction. Meanwhile, both Ag modes present fourfold symmetry in the perpendicular configuration (e_i ⊥ e_s). The polarized Raman spectra agree with the theoretical prediction and help to identify the crystal direction. Based on our infrared spectroscopy measurement, a direct band gap of 273 meV is extracted for the as-grown Ta2NiS5, which agrees with the general consensus of Ta2NiS5 being a narrow-gap semiconductor. [36,37,40] More details are given in Section SVI (Supporting Information). To examine the optoelectronic properties of multilayer Ta2NiS5, the as-grown single crystals are exfoliated mechanically, and device fabrication is performed by a home-built lithography system with lift-off procedures. Figure 2a exhibits the schematic diagram of the device structure. The multilayer Ta2NiS5 is transferred to a SiO2/Si substrate and contacted by electrodes (5 nm Cr/70 nm Au). The photocurrent is measured under ambient conditions with illumination by a 632.8 nm laser.
Due to the narrow-gap nature of Ta2NiS5, [36] the photoresponse is expected to be insensitive to the wavelength of visible lasers, but laser beams with lower wavelengths are found capable of damaging the sample at moderate intensity. Figure 2b depicts the bias-dependent photocurrent with incident power density p = 0.324 mW µm−2 (defined as incident power per unit area). The edge of the spot size is defined as the position at 1.5 standard deviations. The photocurrent I_ph is defined as I_ph ≡ I_illumination − I_dark, which describes the difference between the current with and without laser illumination. The measured photocurrent presents a symmetric and linear bias dependence and goes through the origin of the plot. The photocurrent is extracted as I_ph = 5.35 µA under a bias voltage of U = 1 V and an incident power density of p = 0.324 mW µm−2. The photoresponsivity reaches R = I_ph/P = 2.5 mA W−1, a reasonable value for a small-gap semiconductor. [41] R does not reflect the intrinsic property of the device and material, since it varies with the bias. A more proper physical parameter is the photoconductive responsivity, extracted as 3.1 mS W−1 per □. Figure 2c exhibits the dark current, which is also symmetric and linear with bias, proving the Ohmic contact of the device, an important prerequisite for high-performance devices. [3] The observed bias dependence suggests a photoconductive origin rather than a photovoltaic mechanism for the measured device; otherwise, a Schottky barrier or other built-in potential would result in nonlinear responses in both the I_dark-U and I_ph-U tests. [24,42,43] Meanwhile, the negligible photocurrent at U = 0 V also argues against the photovoltaic mechanism. Figure 2d is an image of the device; the scale bar is 10 µm. The height profile (inset) is measured along the white dashed line, indicating a thickness of 178 nm for the Ta2NiS5 flake. The photoconductive origin is further proved by spatial-resolved experiments, as shown in Figure 2e. The photocurrent is measured along the red dashed line with U = 1 V and p = 0.324 mW µm−2. The FWHM of the laser spot is 1.44 µm (Section SI, Figure S1, Supporting Information), which is much smaller than the size of the sample and ensures the spatial resolution. The blue and green arrows denote the edges between the sample and the electrodes. It is evident that the photocurrent originates from the sample and vanishes at the electrodes, which excludes the photothermoelectric effect as well as a Schottky barrier origin. The optoelectronic properties of the photoconductive device are further examined by switching, time-dependent, and polarization-dependent experiments. Figure 2f exhibits the on/off repeatability test, where the photoresponse remains identical after 2000 cycles. The period of each cycle is about 10 s. We periodically block the laser and continuously measure the photocurrent versus time. The response speed of the device is found to be beyond the limit of the repeatability test system, so we perform a modulation-frequency-dependent study to accurately extract the photoresponse time by the lock-in technique. The normalized photocurrent at different chopping frequencies is plotted in Figure 2g. The frequency-dependent photoresponse is expected to follow the single-pole roll-off I_ph(f) = I_ph(0)/√(1 + (2πfτ)²). [44] The best fit to the experimental results gives a fast photoresponse time of τ = 31.1 µs. The comparatively fast photoresponse suggests only a limited influence from dopants.
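A minimal sketch of this extraction, assuming the single-pole roll-off form given above and synthetic data rather than the measured Figure 2g curve, is:

import numpy as np
from scipy.optimize import curve_fit

def rolloff(f, i0, tau):
    # normalized photocurrent vs. chopping frequency for response time tau
    return i0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

rng = np.random.default_rng(0)
f = np.logspace(2, 5, 25)                           # chopping frequency (Hz)
i_meas = rolloff(f, 1.0, 31.1e-6) * (1.0 + rng.normal(0.0, 0.02, f.size))

popt, _ = curve_fit(rolloff, f, i_meas, p0=[1.0, 1e-5])
print(f"fitted tau = {popt[1] * 1e6:.1f} us")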
Meanwhile, the Ta2NiS5 crystal is known to be anisotropic, and we studied the photoresponse by illuminating the device with linearly polarized light. The angle-dependent photocurrent is shown in Figure 2h. With the crystal direction verified by angle-resolved Raman spectra, a prominent anisotropic photocurrent is observed with twofold symmetry, maximizing along the armchair direction. The photoconductive origin of the photocurrent is evidenced by the discussed photocurrent measurements with multiple tuning knobs. The photovoltaic and photothermoelectric mechanisms are first ruled out because of the linear bias dependence and the spatial origin of the signal. The working frequency of our device also precludes the Dyakonov-Shur mechanism, which is usually observed in the THz regime. [31,45] The bolometric mechanism behaves similarly in bias- and spatial-resolved experiments, but the response speed of the Ta2NiS5 device is much faster than that of general bolometric devices, which have typical response times of 1-100 ms. [46,47] In addition, the absorption rate of Ta2NiS5 is found to be independent of the incident light power. Meanwhile, the conductivity of Ta2NiS5 increases linearly with temperature. These facts, combined with the observation of superlinear power dependence, further validate that the bolometric effect does not contribute to the observed photoresponse. More details are given in Section SIV (Supporting Information). In addition to the discussed device performance, an unusual phenomenon is found in the power dependence of the photocurrent. Figure 3a exhibits I_ph-U curves under different incident light intensities. The incident laser spot is kept at the center of the sample. Higher incident power is expected to result in larger photocurrent due to the increased number of photogenerated electron-hole pairs in the Ta2NiS5. Lower-intensity data are not shown because of overlap with the other low-intensity curves. We extract the photocurrent at U = 1 V, as exhibited in Figure 3b. A clear trend of superlinear power dependence is witnessed. As shown in the inset of Figure 3b, the photoconductivity first declines with the light power and then increases slowly. A drastic rise of R is observed at high illumination power, indicating a counterintuitive higher optoelectronic efficiency. To better resolve that, we plot the photocurrent in different incident power regimes and perform the power-law fitting in Figure 3c-e following I_ph ∝ p^γ. With incident power density lower than 0.015 mW µm−2, a sublinear power dependence of the photocurrent is observed with γ = 0.53 ± 0.03. The error scale is given by the fitting error. Upon increasing the illuminating power, the power dependence of the photocurrent experiences a transition from sublinear to superlinear. Within the power regime of 0.015-0.2 mW µm−2, the photocurrent becomes weakly superlinear with γ = 1.15 ± 0.01. As the light intensity further increases, strong superlinear dependence is found in the high-incident-power regime with a power exponent of γ = 1.5 ± 0.1. A similar trend can also be found in the linear fit of the log-log plot (Section SIII, Supporting Information). To the best knowledge of the authors, such strong superlinearity is unusual for homogeneous 2D materials. As summarized in Figure 3f, the superlinear response of homogeneous 2D devices is generally weak, with γ values lower than 1.1. [11,23,32,48]
Our result of γ = 1.5 represents a giant superlinearity of the photocurrent in a Ta2NiS5 device, which enables higher optoelectronic efficiency at high incident power. (In Figure 3f, the x-axis is sorted in the order of report time.) To explain the superlinear dependence of the photocurrent under high incident power, we provide a two-RC model, as illustrated in Figure 4. Different from previous reports [11,23,49-51] where three centers are required, we will discuss later that the two-RC model is more suitable for narrow-gap Ta2NiS5. The VB and CB denote the valence band and conduction band, respectively. Besides, there might also exist a few in-gap states. The presence of those in-gap states is also evidenced by our infrared spectroscopy measurements, as discussed in detail in Section VI (Supporting Information). Our density functional theory (DFT) calculation suggests that one of the in-gap states might originate from the S vacancy. The in-gap states might also result from other defects such as impurities and dangling bonds [49,52-54] (more details in Section VII, Supporting Information). These in-gap states act as recombination centers of the photogenerated carriers, which could significantly reduce the quantum efficiency. Based on our infrared and transport results, the dopants are at least partially ionized (Section VIII, Supporting Information). To account for the superlinearity, the two recombination centers (RC_i, i = 1, 2) must feature distinct parameters. Among them, the most critical two are the density of states (N_i) and the capture cross-section for electrons (S_ni); S_ni describes the ability of RC_i to capture electrons. Considering the described system at equilibrium, upon absorbing incident photons, electron-hole pairs are generated across the band gap (process A). The solid and hollow dots denote electrons and holes, respectively. Before those carriers are collected by the electrodes, there is a certain probability (mainly determined by S_ni) for the photogenerated electrons to be captured by RC1 (process B) or RC2 (process C). Similarly, the RCs might also capture photogenerated holes from the valence band (processes E and G). Meanwhile, it is possible for a captured electron on RC2 to be thermally emitted to the conduction band (process D) before recombining with a hole, while captured holes might experience a similar procedure (process F). All those procedures influence the carrier populations of the states and in turn vary the probability of each procedure. The probabilities of the different processes vary dramatically, by orders of magnitude. For example, the cross-sections of processes B, E, and H (labeled with dashed lines) are negligibly low due to the large energy difference between the initial and final states. All processes are considered in the model calculation regardless of their probability. It is worth noting that process A is the only one originating from a photoelectric transition; all remaining processes denote purely electric processes, including thermal excitation, trapping, and nonradiative recombination. Other photoelectric transitions and occupation conditions are further discussed in Section V (Supporting Information). Based on the modulation-frequency-dependent measurement, all those procedures and the resultant carrier populations reach equilibrium within ~100 µs. Owing to the orders-of-magnitude higher capture rate for holes, the photocurrent is dominated by the concentration n of photogenerated electrons in the conduction band. [13,24]
Therefore, the photocurrent reads as I_ph = nqµES, where q is the electronic charge, µ is the mobility, E is the electric field, and S is the cross-sectional area of the channel. [19] Without light illumination, the Fermi level in our model lies near the center of the gap. This is further supported by the excitation energy extracted from the transport measurement (details given in Section SVIII, Supporting Information). The Fermi level may stay close to the gap center but deviate by a few meV. As a result, RC1 is almost filled and RC2 is nearly empty because of finite thermal excitation. Under incident light, processes C and G are significantly enhanced. Therefore, on reaching the equilibrium shown in Figure 4a, RC2 becomes more occupied, although most of its states remain empty; meanwhile, RC1 becomes less occupied. With higher incident power, as depicted in Figure 4b, the photogenerated carriers lead to a higher occupation of RC2 and a lower occupation of RC1, which now qualitatively changes the system behavior. The occupation condition influences the strength of all the discussed processes A-H. The response of the photoconductive device can be analyzed within the proposed model. In the upper panel of Figure 5a, we first consider the conventional case where the properties of RC1 and RC2 are similar. Since the photocurrent is determined by the electron concentration, we focus on electron-related processes. Both RC1 and RC2 provide efficient recombination channels through process B and process C, resulting in the recombination of photogenerated carriers before they are collected by the electrodes. The recombination rate increases with incident power, which, in the general case, saturates the photocurrent. Therefore, the photocurrent is expected to be linear or sublinear in its power dependence, as exhibited in the lower panel.

[Figure 5 legend, in part: In the left panel (S_n1 ≈ S_n2, N_1 ≈ N_2), both in-gap states work as efficient recombination centers, leading to a sublinear or linear photoresponse. In the middle panel (S_n1 ≪ S_n2, N_1 ≈ N_2), the negligible electron capture cross-section of the lower in-gap state closes the recombination channel on RC1. Combined with the slow saturation of RC2 in the high-power regime, weak superlinear power dependence is achieved. In the right panel (S_n1 ≪ S_n2, N_1 ≫ N_2), the lower density of states of the upper in-gap state results in a rapid saturation, which effectively closes both recombination channels and potentially leads to prominent superlinear photoresponse. (d) The calculated occupancy ratio of RC2 based on the two-RC model. A higher N_1/N_2 ratio leads to a rapid saturation. (e) The calculated power dependence of the photocurrent. The photoresponse features more prominent superlinearity with a higher N_1/N_2 ratio. (f) The fitting to the experimental data. The two-RC model fits well with the experimental result.]

As suggested by previous work, [13] different capture cross-sections of the RCs potentially lead to weak superlinear dependence of the photocurrent. If the electron capture cross-section of RC1 (S_n1) is much smaller than that of RC2 (S_n2), the recombination channel on RC1 is effectively closed (denoted by red crossings in Figure 5b) and most of the photogenerated electrons are trapped by RC2. In the high-incident-power regime, RC2 becomes densely occupied, which suppresses process C, as denoted by the dashed line.
A lower recombination rate at higher intensity allows for higher optoelectronic efficiency and leads to the superlinear behavior of the power dependence. To account for the observed giant superlinear photoresponse, another critical parameter, N_i, is taken into consideration. With a much lower density of states of RC2, as shown in Figure 5c, the occupation condition changes qualitatively at high incident power. Since the density of states of RC2 is much lower, higher incident light leads to the rapid saturation of RC2, which forbids process C as well as the recombination channel on RC2. Combined with the negligible S_n1, both of the recombination channels are now closed in the high-power regime. Therefore, a giant superlinear power dependence is presented, as shown in the lower panel. To elucidate the effects of N_1/N_2 on the superlinear photocurrent, we perform a numerical calculation based on the two-RC model. For each energy level in this model, all related carrier procedures reach equilibrium in the end. For example, the rate equation for the photogenerated electron concentration of the conduction band is

dn/dt = F − nvS_n1(N_1 − n_1) − nvS_n2(N_2 − n_2) + n_2P_2 − S′v′np,

where the F, −nvS_n1(N_1 − n_1), −nvS_n2(N_2 − n_2), n_2P_2, and −S′v′np terms correspond to procedures A, B, C, D, and H, respectively; n, n_1, n_2 represent the electron densities of the conduction band, RC1, and RC2, respectively; p represents the hole density of the valence band; v denotes the thermal velocity of the carriers, which is assumed to be equal for all carriers for simplicity; F denotes the density of electron-hole pairs created by optical excitation per second, which is determined by the light intensity, quantum efficiency, and absorption rate; S_n1 and S_n2 denote the electron capture cross-sections of RC1 and RC2, respectively; S′ denotes the recombination cross-section between free electrons and free holes; and P_2 denotes the probability per unit time for the thermal ejection of an electron in RC2 into the CB. For all other energy levels, similar equations can be derived by fully considering the related carrier procedures, which ultimately reach equilibrium. The overall equations are provided in Section SV (Supporting Information). By solving the nonlinear equations, the photoresponse of the device with different parameter settings can be numerically extracted (see the sketch below for a minimal illustration). Figure 5d depicts the occupation proportion of RC2. For parameter settings with a high N_1/N_2 (green curve), the electron concentration is intensely saturated at high incident power. In contrast, such a saturation feature is weakened for lower settings of N_1/N_2. The difference in the electron density has a profound influence on the electron density of the conduction band through process C and process D. This is further supported by the calculated photocurrent in Figure 5e. Regardless of the N_1/N_2 setting, all curves exhibit a superlinear feature due to the negligible S_n1; however, a higher N_1/N_2 leads to a more prominent superlinear power dependence of the photocurrent, which agrees with the discussed picture. To quantitatively verify the proposed model, we perform the multiparameter fitting shown in Figure 5f. Due to the nonlinearity of the equations, the fitting is carried out using a gradient descent method, which reaches convergence within one day. The experimentally available values, such as the gap size, are fixed. The black dots denote the experimental data of the Ta2NiS5 device, which is well fitted by the model (red line).
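As referenced above, a minimal sketch of such a numerical solution is given below. It uses illustrative parameters in arbitrary units (not the fitted values) and a reduced subset of processes A-H with a single thermal velocity v; only the qualitative trend, a local exponent rising above 1 as RC2 saturates, should be read from it.

import numpy as np
from scipy.integrate import solve_ivp

v = 1.0                       # common thermal velocity (absorbed into units)
N1, N2 = 16.0, 1.0            # RC densities, N1/N2 >> 1
Sn1, Sn2 = 1e-4, 1.0          # electron capture cross-sections, Sn1 << Sn2
Sp1, Sp2 = 1.0, 1.0           # hole capture cross-sections
P2 = 0.1                      # thermal ejection rate, RC2 -> CB (process D)
Sr = 1e-3                     # band-to-band recombination (process H)

def rates(_t, y, F):
    n, n1, n2, p = y
    c1 = n * v * Sn1 * (N1 - n1)     # process B: electron capture on RC1
    c2 = n * v * Sn2 * (N2 - n2)     # process C: electron capture on RC2
    e2 = n2 * P2                     # process D: thermal ejection from RC2
    h1 = p * v * Sp1 * n1            # hole capture by electrons held on RC1
    h2 = p * v * Sp2 * n2            # hole capture by electrons held on RC2
    rec = Sr * v * n * p             # process H: band-to-band recombination
    return [F - c1 - c2 + e2 - rec,  # dn/dt, the equation quoted above
            c1 - h1,                 # dn1/dt
            c2 - e2 - h2,            # dn2/dt
            F - h1 - h2 - rec]       # dp/dt (charge is conserved)

def steady_n(F):
    y0 = [0.0, N1, 0.0, 0.0]         # dark state: RC1 filled, RC2 empty
    sol = solve_ivp(rates, (0.0, 1e6), y0, args=(F,),
                    method="LSODA", rtol=1e-8, atol=1e-12)
    return sol.y[0, -1]

powers = np.logspace(-2, 2, 9)
n_ss = np.array([steady_n(F) for F in powers])
gamma = np.gradient(np.log(n_ss), np.log(powers))
# The local exponent can exceed 1 in the window where RC2 saturates, before
# band-to-band recombination takes over; exact values depend on parameters.
print(np.round(gamma, 2))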
The critical fitting parameters are extracted as N_1/N_2 = 16.06 and S_n1/S_n2 = 0.87 × 10^-4 (more details are provided in Section SV, Supporting Information). It is worth noting that the experimental data can be fitted by both the two-RC and three-RC models with qualitative similarity; thus, the two-RC model is adopted for simplicity, which also avoids possible overfitting. The fitted model describes a small-gap semiconductor system, in agreement with our infrared spectroscopy result and the generally accepted picture. [35,36,55] Further wavelength-dependent research, especially in the mid-infrared regime, might give new insights into both the band structure and the photoresponse.

The observed fast photoresponse speed also agrees with the proposed model. The superlinear photocurrent requires the presence of recombination centers. However, the strong superlinearity also requires the dopant density, as well as the density of states of the in-gap states, to be low. Only with a low dopant density can the recombination centers be fully occupied at high illuminating power; otherwise, superlinearity is not expected to be observed. The fast photoresponse speed is further aided by the suppression of the recombination process under light illumination. As the power is lowered, the response speed of the device drops, as discussed in detail in Section SIX (Supporting Information). As a consequence of superlinearity, the photocurrent is also expected to drop when the laser spot is expanded. This deduction has been confirmed in our beam-size-dependent experiments, as discussed in detail in Section SX (Supporting Information).

To test the Ta2NiS5 photodetector and the observed superlinearity for potential applications, the imaging function of the device is evaluated. The fabricated device is transferred to the image plane of a camera and controlled by a translation stage so that it mimics the CCD of the camera after a complete scan. A screen showing the image of an apple, with tunable brightness, is used as the target. The photos are exhibited in Figure 6a with 100 ms exposure time, where the apple is successfully photographed. With higher brightness of the target, the apple becomes more distinguishable and can be observed in more detail. For a more quantitative analysis, the root mean square (RMS) contrast of the image is extracted with the definition

C_RMS = sqrt[ (1/(MN)) Σ_i Σ_j ( I_ph(i, j) − Ī_ph )² ]

where M and N are the number of pixels per row and per column, respectively, and Ī_ph is the average value of the signal. As shown in Figure 6b, the RMS contrast of the image increases with the maximum detected power among all pixels. Notably, a superlinear trend is found, resembling the superlinear photoconductivity. By supporting better imaging contrast, superlinear photodetectors could be promising for future optoelectronic detection. The recombination centers play an important role in the superlinear photoresponse. More sophisticated experimental tools and theoretical calculations might help to identify the origin of the in-gap states and to extract their evolution upon light illumination, for a better understanding of the system.
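As a side note on the imaging metric, a minimal numpy sketch of the RMS-contrast computation quoted above is given below. The un-normalized form is assumed here (some definitions additionally divide by the mean signal Ī_ph), and the photocurrent maps are random placeholders rather than measured data.

```python
import numpy as np

def rms_contrast(i_ph):
    """RMS contrast of an M x N photocurrent map I_ph(i, j):
    sqrt(mean((I_ph - I_bar)^2)), with I_bar the mean signal."""
    return np.sqrt(np.mean((i_ph - i_ph.mean()) ** 2))

# Toy placeholder images: a brighter target gives a larger RMS contrast.
rng = np.random.default_rng(0)
dim = 0.2 * rng.random((64, 64))
bright = 1.0 * rng.random((64, 64))
print(rms_contrast(dim), rms_contrast(bright))
```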
Conclusion In summary, we report the optoelectronic characteristics of a photoconductive detector based on multilayer Ta2NiS5 and discover a giant superlinear power dependence of the photocurrent. The time-resolved, frequency-resolved, spatially resolved, bias-resolved, and angle-resolved photocurrent measurements not only present a fast, endurable, and anisotropic photoresponse, but also suggest the photoconductive nature of the device, which ensures that the device performance is determined by the material properties of Ta2NiS5. Starting from an illumination power density of 1.54 μW μm^-2, the photocurrent depends sublinearly on the power density. Around 15.4 μW μm^-2, a transition from sublinearity to superlinearity is witnessed. For incident power densities higher than 0.2 mW μm^-2, a prominent superlinearity is observed with a power exponent of 1.5. The strong superlinearity can be quantitatively explained by a two-RC model: the in-gap recombination centers with distinct densities of states and capture cross-sections lead to the rapid saturation of carrier occupancy, thereby closing both recombination channels and enabling higher optoelectronic efficiency at large incident power. The quantitative fitting between the proposed model and the experiments further validates the proposed physical mechanism. The photos taken by the Ta2NiS5 device demonstrate enhanced RMS contrast, showing potential applications of the superlinear photocurrent. Our work paves the way for exploiting superlinearity in optoelectronic devices and enables better device performance in high-power applications.

Experimental Section Crystal Growth and Characterization: Ta2NiS5 single crystals were prepared by the standard chemical vapor transport method. A stoichiometric mixture of Ta, Ni, and S powder was sealed in a 20 cm evacuated quartz tube with iodine as the transport agent. The tube was loaded in a two-zone furnace kept at 950 °C and 830 °C. After a 5-day growth procedure, shiny, needle-like single crystals were found in the low-temperature zone. XRD was performed on a Bruker D8 Discover. Raman spectra were measured with a home-built system using a 632.8 nm laser. Device Fabrication: Multilayer Ta2NiS5 flakes were mechanically exfoliated from bulk crystals and then transferred to a Si/SiO2 wafer. Devices were fabricated with a home-built lithography system using lift-off procedures. Cr/Au (5 nm/70 nm) was deposited as electrodes, and Ohmic contact was verified by the I-U measurement. Photocurrent Measurement: Photodetectors were excited by a 632.8 nm laser through a 50×, NA = 0.8 objective. The FWHM of the focal spot was 1.44 μm. Bias-, angle-, spatial-, time- and power-dependent photocurrents were measured with a Keithley 2450 using the two-terminal method. Spatial photocurrent scanning was carried out with an additional piezo-actuated stage. The photoswitching test was performed with a periodically switched shutter. The modulation frequency-dependent measurement was carried out with a Stanford Research SR-860 lock-in amplifier and a chopper. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
7,443.8
2022-08-28T00:00:00.000
[ "Physics" ]
Dual-wield NTPases: A novel protein family mined from AlphaFold DB Abstract The AlphaFold protein structure database (AlphaFold DB) archives a vast number of predicted models. We conducted systematic data mining against AlphaFold DB and discovered an uncharacterized P-loop NTPase family. The structure of the protein family was surprisingly novel, showing an atypical topology for P-loop NTPases, noticeable twofold symmetry, and two pairs of independent putative active sites. Our findings show that structural data mining is a powerful approach to identifying undiscovered protein families.

| INTRODUCTION Characterizing protein structures is essential for understanding the molecular basis of their function, and structures are typically solved by experimental approaches and deposited in the Protein Data Bank (PDB) (Burley et al., 2022). When a solved protein adopts a novel structure seen for the first time, the finding is usually reported by the researchers who determined it. More recently, however, public databases produced by state-of-the-art structure prediction, such as the AlphaFold protein structure database (AlphaFold DB) and the ESM Metagenomic Atlas (ESM Atlas), have been changing this situation (Lin et al., 2023; Varadi et al., 2024). These databases are approximately three orders of magnitude larger than the PDB and contain numerous experimentally unsolved protein structures. Structural models never seen by human beings must lie hidden there, since the models were generated automatically by artificial intelligence and deposited without any human curation, providing opportunities for finding novel proteins in silico based only on structural information.

Dedicated data mining demands a clearly stated working hypothesis. While several groups have pursued intensive model classifications against AlphaFold DB (Barrio-Hernandez et al., 2023; Bordin et al., 2023; Durairaj et al., 2023), this bird's-eye approach could miss unique and intriguing proteins. To find these hidden gems, we defined a very specific database search question: are there monomeric proteins that contain multiple phosphate-binding loops (P-loops) on a single continuous β-sheet? The P-loop, or Walker-A motif, is a local functional motif that recognizes phosphate groups and is shared among P-loop NTPases, such as ATPases, GTPases, and nucleotide kinases (NKs) (Leipe et al., 2002; Leipe et al., 2003; Saraste et al., 1990; Walker et al., 1982). In general, one P-loop resides on a single continuous β-sheet of a three-layered α/β/α sandwich architecture. Our preliminary search against the PDB supported this observation, because no structure has multiple P-loops in a single β-sheet. However, the possibility that a single β-sheet possesses multiple P-loops should not be excluded. We hypothesized that such experimentally unobserved multiple-P-loop structures exist in AlphaFold DB and can be discovered via systematic data mining.
| RESULTS By computationally scanning more than 214 million entries in AlphaFold DB version 4 (Kim et al., 2023; Varadi et al., 2024), we extracted 15,977 single-chained structures possessing multiple P-loops. We then analyzed the hydrogen-bond network and extracted 839 structures with multiple P-loops on a single continuous β-sheet (Frishman & Argos, 1995). The structures were grouped into 11 clusters based on structural similarity (Van Kempen et al., 2023). As a result, we found an uncharacterized family of P-loop proteins, the dual-wield P-loop NTPase (dwNTPase), as the largest cluster, with 711 members. All structural models in this cluster were predicted with high confidence scores; the average predicted Local Distance Difference Test score was 94.27, indicating that the predictions were reliable (Figure S1) (Jumper et al., 2021).

The overall architecture of dwNTPases was novel and showed noticeable twofold symmetry. Figure 1a shows the structure of a representative dwNTPase from Bacillus thuringiensis (Bt.; UniProt accession no. A0A1Y0TWD8). Two P-loop domains are tightly packed and surrounded by two long bridging α-helices and two framing α-helices. The two bridging α-helices cover the top side of the two α/β P-loop domains and form a coiled-coil packing around residues 124-155 and 305-336 (Kumar & Woolfson, 2021). The C-terminal α-helices pack against the N-terminal domain, forming very long-range contacts, which means the symmetry in the dwNTPase architecture does not result from tandem repeats of identical domains but involves a more complicated exchange of secondary structural elements (SSEs; α-helices and β-strands) between them (see Section 3). Each of the domains comprises six β-strands forming two β-sheets. Since these two six-stranded β-sheets are connected by two hydrogen bonds between the C-terminal end of strand 0 and its symmetrical counterpart, these parts form a continuous 12-stranded β-sheet and reveal a previously unobserved dual-P-loop architecture (Figure 1b). Although the hydrogen bonds between the two six-stranded β-sheets allowed us to identify dwNTPase structures during data mining, the interactions between them are so weak that the large β-sheet may dissociate under realistic conformational fluctuations. Two canonical P-loops independently form two putative ligand binding sites that penetrate through the molecule and resemble tunnels rather than pockets (Figure 1c). Two β-hairpins from each domain form a pier-like structure that looks like a planar "wall" between these two tunnels, but the β-hairpins, which we call pier sheets, do not form a single four-stranded β-sheet, as they have no hydrogen bonds between them. A search against the PDB clarified that no similar structures have been reported (Minami et al., 2018; Van Kempen et al., 2023). Similarly, the Swiss-Prot subset of AlphaFold DB contained no similar structures (Minami et al., 2018; Van Kempen et al., 2023), indicating that the dwNTPase family has no reliable annotations manually verified by UniProt curators.
We found that the P-loop domain of dwNTPases was structurally atypical for a P-loop NTPase by searching against the PDB (Figure 2a) (Minami et al., 2018; Van Kempen et al., 2023). A crystal structure of the mutual gliding-motility protein MglAa from Myxococcus xanthus (PDB ID: 6h35), a small, monomeric bacterial GTPase, was the only known P-loop NTPase that showed relevant structural similarity to the dwNTPase P-loop domain (Galicia et al., 2019). The P-loop domain of dwNTPase has an additional β-strand at the N-terminus (strand 0) compared to the MglAa structure (Figure 2b). Two strands constituting the pier sheet and a successive α-helix are also appended. In contrast, the domain lacks two C-terminal β-strands (strands 6 and 7) and some other surrounding SSEs. These unique arrangements of SSEs give rise to an atypical topology that does not resemble other P-loop NTPases (Figures S2 and S3) (Chandonia et al., 2022; Minami et al., 2018). Furthermore, the P-loop domain has a long loop instead of a helix conserved in other P-loop NTPases (Figure S4), which we named the switch loop (Figure 2a). These atypical features of the P-loop domain make it difficult to assign dwNTPase to known classes of P-loop NTPases.

Despite these novel features of dwNTPase, an iterative structure search by Foldseek against the entire AlphaFold DB revealed that 2219 similar structures were deposited, most of which originated from bacteria in various Firmicutes (Table 1 and Table S1) (Van Kempen et al., 2023; Varadi et al., 2024). Similar searches against the ESM Atlas, culled at 30% sequence identity, found 748 similar structures (Table S2) (Lin et al., 2023). We classified dwNTPase structures into six subclasses based on the conservation of motifs and domains (Figure S5). The bona fide dwNTPase structure with two intact P-loops (class 1) was the most abundant, suggesting that functional constraints exist to conserve the two active P-loops. A BLAST search against the nonredundant database revealed that dwNTPase had been classified as the PRK06851 family protein in the NCBI conserved domain database (McGinnis & Madden, 2004; Wang et al., 2023). Thus, we concluded that dwNTPases constitute a conserved protein family among bacteria.

| DISCUSSION The molecular functions of dwNTPases were investigated by analyzing conserved residues. Although the sequence identities between the two halves of dwNTPase structures are generally low (median 23.1%), the most symmetric class of dwNTPases (class 1) possesses two clusters of conserved residues shared between both halves (Figure 3a and Figure S6). We found that Cys66/Cys248 (residue numbers follow Bt. dwNTPase), Asp74/Asp256, Asp87/Asp269, and His92/His274 formed putative metal binding sites. Molecular dynamics (MD) simulations of the Bt. dwNTPase structure complexed with two ATPs, two Mg2+ ions, and two Zn2+ ions showed that the Zn2+ ions were stably coordinated by two aspartates and the γ-phosphate group of the ATPs (Figure 3b) (Abraham et al., 2015; Huang et al., 2017), which resembles the active site structure of metal-dependent nucleotidyl-transfer enzymes (Figure 3c) (Yang, 2008). The side chains of Cys66/Cys248 and His92/His274 remained unoccupied (Figure 3d), suggesting that they may have roles other than metal binding. As the pair of cysteine and histidine residues is reminiscent of the catalytic triad/dyad in cysteine proteases (Figure 3e), we hypothesize that dwNTPases have additional hydrolase/ligase activity (Dodson & Wlodawer, 1998).
In addition to these conserved residues, we identified other regions characteristic of dwNTPases. First, each P-loop domain has a conserved lysine residue (Lys36/Lys218) that precedes the P-loop and interacts with the two switch loops. Because the switch loop partially conceals the ligand binding tunnels (Figure S7a) and is highly flexible in MD simulations (Figure S7b), the conserved lysine residues may play sensor-like roles that trigger NTPase activity, depending on the binding of other ligands to the tunnels. Additionally, the P-loops are surrounded by several charged or polar residues that support the recognition of NTPs and Mg2+ ions (Figure S7c) and are not conserved in known P-loop NTPases (Leipe et al., 2002; Leipe et al., 2003).

Two previous gene knockout studies suggest that a dwNTPase (Cd630_32980 or CD3298) plays a role in the accumulation of dipicolinic acid in spores of Clostridioides difficile (Kochan et al., 2017; Ribis et al., 2023). This is consistent with the fact that dwNTPases are distributed among various Firmicutes, especially among Bacilli and Clostridia (Table 1), which are known for spore formation. However, the detailed biological roles and molecular mechanisms of dwNTPases remain elusive because their structures show limited homology to NTPases with known functions. In other words, this indicates that dwNTPases rely on a unique molecular mechanism to function. The twofold symmetry implies that the interaction partner of dwNTPases also possesses twofold symmetry, such as double-stranded DNA, or that the cleft between the two P-loop domains recognizes ligand molecules in a manner similar to periplasmic heme-binding proteins (Figure S8) (Mattle et al., 2010). When focusing on the regions around this cleft, one of the two hydrogen bonds connecting the β-sheets of the two P-loop domains, from the N atom of residue 9 to the O atom of residue 191, was broken in 19 final snapshots out of 20 MD trajectories. By contrast, the other one, from the N atom of residue 191 to the O atom of residue 9, was intact in 18 final snapshots. These observations reinforce our initial assessment that the interactions between the two β-sheets are weak under thermal fluctuations and also suggest a possible functional asymmetry of the two P-loop domains. Asymmetry was also found in the amino-acid composition of the individual halves; the left half (residues 1-139 and 321-369) of the structure in Figure 1a is more positively charged than the right half (residues 140-320), indicating that each half plays different functional roles (Figure 1c and Figure S9).

The evolutionary origin of dwNTPases is unknown. Although it is plausible that dwNTPases gained twofold symmetry via gene duplication, domain swapping, and gene fusion (Figure S10) (Hadjithomas & Moudrianakis, 2011; Toledo-Patiño et al., 2019), the origin of the unique topology of the individual P-loop domains remains unclear. Detailed phylogenetic analysis may explain the evolution of P-loop NTPases, including dwNTPases (Leipe et al., 2002; Leipe et al., 2003). Structural and biochemical studies are required and should provide greater insight into the biological significance of the dwNTPase family.

| CONCLUSIONS In summary, we demonstrated that structural data mining based on a specific working hypothesis can discover uncharacterized protein families, for example, dwNTPase, and is a powerful approach to exploring dark proteomes (Perdigão et al., 2015; Taylor et al., 2009), the unwatched region of the protein universe, which will help and encourage the design of experimental studies.
| Identification of structures containing multiple P-loop-like fragments AlphaFold DB (v4 UniProt) was downloaded from the Foldcomp database (Kim et al., 2023; Varadi et al., 2024). We used Foldcomp version 0.0.2, installed via pip. P-loop NTPase protein structures were extracted by converting the models into sequences of ABEGO letters using a custom Python script, where A, B, E, and G, respectively, denote backbone dihedral angles (phi, psi) for the α, β, left-handed β, and left-handed α regions of the Ramachandran plot (Wintjens et al., 1996). O denotes other conformations unassignable on the Ramachandran plot, typically a cis-peptide conformation. Typical P-loop (Walker-A) motifs have conformations represented by EBBGAG or BBBGAG, both of which can be seen in the crystal structure of the α and β subunits of bovine mitochondrial F1-ATPase (chain A and chain D of PDB ID: 1bmf) (Abrahams et al., 1994). Because the P-loop is a junction between a β-strand and an α-helix, we extended the ABEGO motifs to "BBBEBBGAGAAAAA" or "BBBBBBGAGAAAAA" and extracted all the structures containing either of them by sequence pattern matching. We then calculated the Cα root-mean-square deviations (RMSDs) of the matched substructures against the reference P-loop fragment (residues 166-179 of 1bmf, chain A) using the pair_fit command in PyMOL 2.5.0 and filtered out substructures with Cα RMSDs larger than 2.0 Å. We obtained 15,977 proteins containing multiple P-loop-like fragments and built a custom Foldcomp database for subsequent procedures using the tar2db command from MMseqs2 (version 96b2009982ce686e0b78e226c75c59fd286ba450) (Kim et al., 2023; Steinegger & Söding, 2017).
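The ABEGO conversion and pattern matching described above can be sketched as below. The Ramachandran bin boundaries are approximate assumptions of this sketch (the original definition follows Wintjens et al., 1996), and cis-peptide detection ("O") is not implemented.

```python
import re

def abego_letter(phi, psi):
    """Approximate ABEGO binning of backbone dihedrals (degrees). The bin
    boundaries here are assumptions; 'O' (e.g., cis-peptides) is omitted."""
    if phi < 0:
        if -75.0 <= psi < 50.0:
            return "A"    # right-handed helical region
        return "B"        # extended / beta region
    if -100.0 <= psi < 100.0:
        return "G"        # left-handed helical region
    return "E"            # left-handed extended region

P_LOOP_PATTERNS = re.compile("BBBEBBGAGAAAAA|BBBBBBGAGAAAAA")

def find_p_loop_like(dihedrals):
    """dihedrals: list of (phi, psi) tuples, one per residue.
    Returns the start indices of P-loop-like ABEGO matches."""
    abego = "".join(abego_letter(phi, psi) for phi, psi in dihedrals)
    return [m.start() for m in P_LOOP_PATTERNS.finditer(abego)]

# Toy check: a fabricated trace that spells out BBBBBBGAGAAAAA.
trace = ([(-120, 130)] * 6 + [(60, 30)] + [(-60, -45)] + [(60, 30)]
         + [(-60, -45)] * 5)
print(find_p_loop_like(trace))  # -> [0]
```

In the actual pipeline, the matched substructures would then be superposed onto the 1bmf reference fragment and filtered by the 2.0 Å Cα RMSD cutoff described above.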
| Identification of dual-wield NTPases Visual inspection revealed that most structures with multiple P-loop-like fragments within a single chain were tandem repeats of known P-loop NTPase domains connected by flexible linkers. Such proteins were excluded by analyzing the structures with STRIDE2TOP (version 1.0), which enumerates the β-sheets in a protein structure based on the hydrogen-bond definition given by STRIDE and reports the list of β-strands in each of the β-sheets. By assigning the two nearest β-strands flanking each P-loop-like fragment to the β-strands in the list, we obtained 839 structures possessing two P-loop-like fragments on a single β-sheet. These structures were clustered by TM-score calculations (≧0.5) with Foldseek (version 5285cd11c335e1a0133ffd3e32f55ad6ff82f3cb) into 11 clusters (Van Kempen et al., 2023). The largest cluster contained 711 members, which corresponded to the dual-wield NTPases. For these structures, we performed all-against-all structure alignment using MICAN (version 2019.11.27) and defined the structure with the largest average TM-score as the representative (AF-A0A1Y0TWD8-F1-model_v4) (Minami et al., 2018).

| Extraction of structures similar to dwNTPase from AlphaFold DB and ESM Atlas We performed iterative structure searches using Foldseek (version 9b92c127ac27a546a0c31f19ea4f48339e790ca0) to enumerate as many structures resembling dwNTPase as possible (Van Kempen et al., 2023). In the first stage, we performed a structure search against AlphaFold DB using all 711 structures initially mined from AlphaFold DB as queries. After removing overlapping structures, we obtained 1377 structures. Using these structures as seeds, we again performed a Foldseek search and obtained 135 new non-overlapping structures. The third iteration of the Foldseek search yielded some nonspecific hits. Therefore, we stopped the iteration, manually selected similar structures, and discarded the rest. Consequently, we obtained 2219 dwNTPase structures from AlphaFold DB. When using Foldseek's internal functionality to perform an iterative search with six iterations, enabled by the option --num-iterations 6, we obtained only 2115 structures, which constitute a strict subset of these 2219 structures. Similarly, we performed structural searches against the highquality_clust30 subset of ESM Atlas using the 711 structures found in AlphaFold DB as queries and obtained 748 structures with a TM-score larger than 0.5 (Lin et al., 2023; Van Kempen et al., 2023; Xu & Zhang, 2010).

| Whole structure search against the PDB and Swiss-Prot subset of AlphaFold DB To assess the novelty of the dwNTPase structure and gain insights into its function, we performed structural searches against PDB100 and the Swiss-Prot subset of AlphaFold DB (version 4) using the Foldseek server in TM-align mode with the representative structure as the query (Burley et al., 2022; Van Kempen et al., 2023; Varadi et al., 2024). No relevant (TM-score ≧ 0.5) hit was found in these databases. We used MICAN to perform rigorous one-against-all searches without pre-filtering; however, no similar (TM-score ≧ 0.5) structures were found in the PDB (2023-09-Jan) or the Swiss-Prot subset of AlphaFold DB (version 2) (Minami et al., 2018).
5.5 | Domain structure search against the PDB, Swiss-Prot subset of AlphaFold DB, and SCOPe We searched for structures similar to the P-loop domain of the representative structure (residues 1-110) against PDB100 and the Swiss-Prot subset of AlphaFold DB using the Foldseek server (Van Kempen et al., 2023). No relevant hit was found. We used MICAN to perform a rigorous structure search without pre-filtering against the PDB (2023-09-Jan) and the Swiss-Prot subset of AlphaFold DB (version 2) (Minami et al., 2018). We obtained 358 and 2931 relevant hits (TM-score ≧ 0.5) from the PDB and Swiss-Prot, respectively. We performed clustering with MMseqs2 at a sequence identity threshold of 35% and obtained 15 and 137 clusters (Steinegger & Söding, 2017). The alignments were checked by visual inspection of all cluster representatives. We found that some structures showed a topology similar to the P-loop domain of dwNTPase: 6h35, Q1DB04, and Q9UBK7 from the PDB and Swiss-Prot, which are annotated as GTPases or GTP-binding proteins (Galicia et al., 2019). The remaining hits showed RecA-like topology and were not topologically identical to dwNTPase, because the RecA-like topology has an all-parallel β-sheet, whereas dwNTPases have β-sheets containing anti-parallel strands. Similarly, we performed structural comparisons against domain structures classified as G-proteins (SCOP concise classification string: c.37.8), NKs (c.37.1), and RecA-like proteins (c.37.11) in SCOPe version 2.08 using MICAN (Chandonia et al., 2022; Minami et al., 2018). The groups of G-proteins, NKs, and RecA-like proteins contained 255, 212, and 118 parsed domain structures, respectively, and we selected the structures showing the highest TM-score in each group for visualization (Figure S2). Note that when we added residues 341-369 (an α-helix) of the representative structure to its residues 1-110 as the P-loop domain, we obtained no similar structure in any of the structural databases.

| Calculation of sequence identities between the two halves of dwNTPase structures We selected 1903 structures with more than 340 residues from the set of dwNTPases extracted from AlphaFold DB. Each structure was self-aligned by MICAN in the rewiring mode, which ignores the sequential order of SSEs (Minami et al., 2018). The sequence identity was calculated based on the second-best alignment reported by MICAN.

5.7 | Identification of putative catalytic residues (conserved residues) and a side-chain pattern search against the PDB The potential function of dwNTPases was examined by performing a sequence search and alignment to identify conserved residues with HHblits (version 3.3.0) against UniRef30_2022_02 (Remmert et al., 2012; Suzek et al., 2015). After three iterations, 2687 sequences were extracted from the database. To exclude fragmented sequences, most likely originating from partial matches to the P-loop consensus motif, we removed aligned sequences with more than 10 gaps against the representative sequence and obtained a multiple sequence alignment (MSA) with 138 sequences. From this MSA, the site-wise entropy of the alignment columns was calculated to identify conserved residues, and the top 10 residues around each of the two tunnels were listed. We defined tunnel 1 as residues 61-100 and tunnel 2 as residues 243-282.
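The site-wise entropy scoring works as in the minimal sketch below; ignoring gaps and using log base 2 are implementation assumptions of this sketch, not details specified in the text.

```python
import math
from collections import Counter

def sitewise_entropy(msa):
    """Shannon entropy (bits) per alignment column; low entropy = conserved.
    msa: list of equal-length aligned sequences. Gaps ('-') are ignored."""
    entropies = []
    for col in zip(*msa):
        counts = Counter(aa for aa in col if aa != "-")
        total = sum(counts.values())
        if total == 0:               # all-gap column
            entropies.append(0.0)
            continue
        entropies.append(-sum((c / total) * math.log2(c / total)
                              for c in counts.values()))
    return entropies

# Toy MSA: column 0 is perfectly conserved (entropy 0.0).
print(sitewise_entropy(["MCDHA", "MCDHG", "MCEHA", "MADHA"]))
```

Ranking the tunnel-region columns by ascending entropy then yields a conserved-residue shortlist analogous to the one reported below.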
From tunnel 1, residues 62, 66, 67, 69, 73, 74, 75, 87, 88, and 100 were identified. From tunnel 2, residues 244, 246, 247, 248, 252, 255, 256, 261, 263, and 274 were identified. According to the orientation of the side chains toward the tunnel, we selected Cys66, Ser67, Asp74, and Asp87 as candidate functional residues in tunnel 1. Similarly, Cys248, Asp256, and His274 were selected for tunnel 2. Considering the symmetry of the dwNTPase structure, Cys66/Cys248, Asp74/Asp256, Asp87/Asp269, and His92/His274 were considered clusters of functional residues in tunnels 1 and 2. We performed a side-chain pattern search against the PDB using the strucmotif-search program (version 0.18.1) to determine whether any protein structures possessed similar side-chain configurations (Bittrich et al., 2020). The set of residues Cys66, Asp74, Asp87, and His92 in the representative structure was selected as the query, and a search was performed against all structures in the PDB (2022-28-12), with the threshold for structural similarity set to 1.0 Å. The side-chain pattern search gave no hits, indicating that the putative catalytic residues have a novel configuration of conserved residues.

| Docking of ATP, Mg, and Zn We transplanted ligand structures from existing PDB structures to model the complex structures. The P-loop region of an ATPase crystal structure (PDB ID: 6j18) was superposed onto the P-loop of the representative structure by MICAN in PyMOL, and the ATP and Mg2+ models were extracted (Minami et al., 2018; Schrodinger, 2015; Wang et al., 2020). Similarly, His125 from a zinc finger motif (PDB ID: 2hgh) was superposed onto His92 and His274, and the coordinating Zn2+ ions were extracted (Lee et al., 2006). The extracted ligand molecules were merged with the representative structure.

| MD simulations MD simulations were performed with Gromacs version 2022.04 and the CHARMM36 force field (Abraham et al., 2015; Huang et al., 2017). The size of the simulation boxes was determined by the molecule size with margins of 13 Å. After in vacuo energy minimization to remove steric clashes, the protein-ligand complex was solvated with the TIP3P water model and 0.1 M NaCl, and the system was neutralized by adding additional Na+ or Cl− ions, depending on the total charge of the protein and ligands. The energy was minimized by steepest descent and equilibrated by 100 ps NVT and NPT simulations with harmonic restraints on the non-hydrogen atoms. The temperature and pressure of the system were controlled at 300 K and 1 bar by the V-rescale thermostat and the Parrinello-Rahman barostat. Electrostatic interactions were computed by the particle mesh Ewald method, and bonds involving hydrogen atoms were constrained by the LINCS algorithm. For each docked model, we performed 20 trajectories of 100 ns simulations with a 2-fs time step.

FIGURE 1 AlphaFold2-predicted model of a dual-wield NTPase structure (AF-A0A1Y0TWD8-F1-model_v4). (a) Overall dwNTPase structure colored according to a purple-white-orange gradient from the N- to the C-terminus. (b) Topology diagram of dwNTPase. Blue and red arrows represent β-strands pointing up and down that form the large β-sheets in the P-loop domains. Green arrows represent the two pier sheets. White rectangles are α-helices. Gray and black lines indicate junctions projecting behind and out of the β-sheets, respectively. Blue dotted lines represent hydrogen bonds connecting the two halves of the large β-sheet. (c) The location and shape of the ligand binding tunnels. The color bar is at the bottom.
FIGURE 2 P-loop domain. (a) Front and top views of the dwNTPase P-loop domain colored according to a purple-white-orange gradient from the N- to the C-terminus. The P-loop, switch loop, and pier sheet are indicated by labels. (b) Topology diagrams and cartoon representations of the dwNTPase P-loop domain and the MglAa structure. Arrows and rectangles represent β-strands and α-helices. Secondary structures that align between the two structures are colored blue.

TABLE 1 Phylogenetic classification of dwNTPases. We performed structural alignment of all 2219 structures against the representative dwNTPase structure. To ensure that fragmented structures were excluded, 1843 structures showing TM-scores >0.85 were selected. Entries with no phylogenetic information available in UniProt were ignored. The structures (1727 in total) were classified by species. "Others" includes environmental samples, metagenomes, unclassified bacteria, and Firmicutes from environmental samples.

FIGURE 3 Putative functionally relevant residues. (a) Conserved residues in the putative ligand binding tunnels. His, Cys, and Asp are colored blue, orange, and red, respectively. P-loops and their conserved residues are colored cyan and gray. (b) Coordination of metal ions by two aspartate side chains observed in MD simulations. ATP is shown in orange stick representation. Side chains of relevant residues are shown as sticks with CPK coloring. Green and gray spheres represent Mg2+ and Zn2+ ions, respectively. The P-loop is colored cyan. (c) The active site structure of RNase H (PDB ID: 1zbl). The side chains of the metal-coordinating amino acid residues Asp and Asn are shown as sticks with CPK coloring, where Asn is a mutation from Asp. Mg2+ ions are shown as spheres. The Mg2+ ion coordinating with the side chains of Asp and Asn is colored green. Nucleic acid residues that contact the Mg2+ ion are shown in orange. (d) The catalytic triad-like side-chain configuration observed during the MD simulations. The triad-like side-chain cluster is circled. The black dotted line indicates the hydrogen bond between the side chains of His92 and Asp74, which HBPLUS detected. (e) The active site structure of TEV protease (PDB ID: 1lvm). Side chains of the Cys-His-Asp catalytic triad are shown as sticks with CPK coloring and circled. MD, molecular dynamics.
5,406.8
2023-05-17T00:00:00.000
[ "Biology", "Computer Science" ]
Association of Age and Gender distribution in Patients undergoing Onlay Restoration Sahil Choudhari1, Subash Sharma*2, Jaiganesh Ramamurthy3 1Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai, Tamil Nadu, India 2Department of Conservative Dentistry and Endodontics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai, Tamil Nadu, India 3Department of Periodontics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences (SIMATS), Saveetha University, Chennai, Tamil Nadu, India

INTRODUCTION The most common cause of enamel loss in a clinical situation is dental caries (Rajendran, 2019). Although a significant worldwide reduction in the prevalence of dental caries has been seen, untreated carious lesions remain highly prevalent in permanent teeth, affecting about 35 percent of the world's population, particularly in the posterior teeth. Bacteria play the most significant role in the initiation and progression of diseases involving the pulp and periapical regions (Manohar and Sharma, 2018). MMPs are produced by odontoblasts and have a wide role in dental caries and periapical inflammation (Ramesh et al., 2018). While caries is the predominant cause of tooth structure loss, many other non-carious lesions, such as attrition, abfraction, erosion and fracture, can also lead to a breakdown of the hard tissues, thus requiring their restoration. The consequences of these lesions are sensitivity and high wear (Hussainy, 2018). One of the most commonly seen injuries involving the teeth and their supporting structures is dental trauma (Jose et al., 2020). If a patient reports with only chipped teeth or localized defects, veneers are usually the material of choice due to their conservative and esthetic approach (Ravinthar and Jayalakshmi, 2018). Obliteration of the pulp canal usually occurs after serious tooth injuries (Kumar and Antony, 2018). Avulsion of permanent teeth causes significant damage to the supporting tissues and to vascular and nerve structures (Rajakeerthi and Nivedhitha, 2019). Other causes leading to pulpal involvement include dental erosion, which is caused by acid attacks (Nasim and Nandakumar, 2018). There are multiple ways to restore posterior teeth, including direct materials like amalgam and composite and indirect materials like ceramic and metal. The clinician's selection of a particular material and technique for the restoration of posterior teeth may be affected by personal choice and skills, patient demands and finances, among others (Laegreid et al., 2014). Prior to planning restorative treatment, diagnosing the pulp status is very important. Diagnosing the exact pulpal status by direct examination is uncertain because the pulp is enclosed within hard tissue. In order to identify the actual pulp status, a surrogate test must be performed (Janani et al., 2020). The quality of dental restorations has been significantly influenced by changes in treatment practices, the implementation of improved restorative materials and techniques, successful preventive programs, enhanced dental care and increasing interest in caries-free teeth. There are two types of restorations that can usually be used to restore a tooth: (a) direct and (b) indirect (Qualtrough et al., 2009).
Every type of restoration has its own advantages and disadvantages as well as indications and contraindications. Complete coverage restorations are commonly used in daily clinical practice, particularly when the loss of tooth structure exceeds 50 percent. Gold, metal-ceramic, all-ceramic and zirconia crowns have been used successfully, and all reflect different choices of restorative materials (Sailer, 2009). Choosing an alloy for a cast metal restoration in the 1950s meant choosing a high-gold alloy per the ADA specification, composed of more than 75 percent gold and platinum (Siddique, 2019; Teja and Ramesh, 2019). The type I alloy (soft) contained mostly noble metals (83%), and the types II, III and IV (hard) were composed of an increasing quantity of silver and copper. The most commonly used base alloys are Ni-Cr and Ni-Cr-Be; beryllium improves the physical properties of the alloy (Pierce and Goodkind, 1989). Dental ceramics, also called porcelains, have a composite structure with one or more crystalline phases within a glass matrix (Ramanathan and Solete, 2015). There are various porcelain systems available, and research continues to develop stronger, highly aesthetic and multi-purpose materials for crowns, bridges, inlays and onlays. Previously, our team has conducted numerous clinical studies, in-vitro studies, randomized controlled trials (Ramamoorthi et al., 2015) and reviews (Noor and Others, 2016) over the last 5 years. Now, we are focusing on epidemiological surveys. The idea for this study stemmed from the current interest in our society. The aim of the study was to find out the association of age, gender and tooth number in patients undergoing onlay restoration.

MATERIALS AND METHODS This study was conducted in a university setting. A total of 86,000 patient records at a private dental college were reviewed between June 2019 and March 2020. Our study included all the people who had undergone onlay treatment; a total of 49 onlay procedures were done. Cross-verification of the data was done using photographs and RVGs. The data were reviewed by an external reviewer. To minimize sampling bias, all the available data were included in the study. The data collected included name, age, gender, tooth number and material used for onlay restoration. The collected data were tabulated using Microsoft Excel and analyzed using SPSS. Descriptive (frequency distribution) and inferential (chi-square test) statistics were performed.

RESULTS AND DISCUSSION In our study, we observed that the age group below 30 years (p>0.05) (Figure 1) reported the most for onlay treatment, with a higher incidence among males (p>0.05) (Figure 2). The tooth most frequently treated with an onlay was 46 (p>0.05). Metal-ceramic was the most common type of material used for onlay fabrication (p>0.05). As depicted in Figure 1, a chi-square test showed that the association between age and material used for onlay was statistically not significant: Pearson's chi-square value = 3.320, df = 4, p-value 0.506 (>0.05). As depicted in Figure 2, a chi-square test showed that the association between gender and material used for onlay was statistically not significant: Pearson's chi-square value = 27.796, df = 2, p-value 0.247 (>0.05). Our study highlighted the association of age, gender and the tooth most commonly involved in onlay restoration.
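For readers reproducing the analysis, the chi-square tests above can be run in a few lines, as sketched below. The contingency counts are hypothetical placeholders chosen only to total the 49 restorations; only the reported statistics (e.g., chi-square = 3.320 with df = 4 for age versus material) come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical age-group x onlay-material contingency table (counts assumed).
#                 metal  metal-ceramic  all-ceramic
table = np.array([[10,   11,            7],    # < 30 years
                  [ 5,    6,            9],    # 30-60 years
                  [ 1,    0,            0]])   # > 60 years

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```

A 3 x 3 table gives df = (3 − 1)(3 − 1) = 4, matching the degrees of freedom reported for the age-material comparison.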
Among the 49 onlay restorations evaluated, the maximum number of onlay restorations was done in the age group below 30 years of age (57%). Only 2% of the onlay restorations belonged to the age group above 60 years, and 41% belonged to the age group between 30 and 60 years. In terms of gender, 57% were males and 43% were females. The most common tooth involved in onlay restoration was 46 (29%), followed by 26 (12%). The material of choice for onlay restoration was almost equally distributed between metal (33%), metal-ceramic (36%) and all-ceramic (31%). Indirect ceramic restorations can be fabricated either in a laboratory by a dental technician or chairside in a single sitting using CAD/CAM systems. Longevity studies show a 0% to 7.5% annual failure rate (AFR) for ceramic inlays/onlays, and between 0.8% and 4.8% AFR for chairside-fabricated restorations. Indirect ceramic restorations demonstrated comparable or slightly better clinical performance than direct composite restorations, especially given that indirect restorations are usually larger (Wittneben, 2009). Studies have found a gender influence on the survival of restorations. Men may have stronger biting forces than women, which may contribute to higher failure rates due to failure of the bonding interfaces or fatigue of the material, resulting in fracture and debonding. As highlighted by Schulz and others (Schulz et al., 2003), the combination of various patient factors, like unfavourable loading and an inadequate material dimension, could have contributed to the higher rate of failure in men observed in their study. Parafunctional habits can outweigh the gender influence, and therefore gender should not be an outlying variable when evaluating the survival of a restoration. Furthermore, women attend dental services regularly, as they are more concerned about their health. In a study done by Olsson (2019), women were more likely to choose an indirect restoration compared to men. This is in contradiction to our study and to a previously reported gender-equal distribution in the utilization of dental care (Sondell et al., 2003). The higher mean age of people who choose an indirect coronal restoration may be linked to differences in dental status between age groups. Older patients have an increased number of missing teeth as well as filled teeth with multiple missing or filled surfaces (Boslaugh, 2007). In general, older people may thus be more likely to need a crown compared to younger individuals with a greater amount of remaining tooth substance, which was in contradiction to our study, in which the age group below 30 years reported the most for onlay restoration. The limitations of our study were that it was an institution-based study, the duration of cases taken into account was only about one year, and the sample size was very small. Future scope includes taking a larger population into account and including populations from different geographical locations.

CONCLUSIONS The age group below 30 years (p>0.05) reported the most for onlay treatment, with a higher incidence among males (p>0.05). The tooth most frequently treated with an onlay was 46 (p>0.05). Metal-ceramic was the most common type of material used for onlay fabrication (p>0.05). Within the limitations of the study, no significant difference was found between age, gender and type of material used for onlay fabrication. Funding Support The authors declare they have no funding support for this study.
2,237.2
2020-09-12T00:00:00.000
[ "Medicine", "Materials Science" ]
Enhanced Pulse Compression within Sign-Alternating Dispersion Waveguides We show theoretically and numerically how to optimize sign-alternating dispersion waveguides for maximum nonlinear pulse compression, while leveraging the substantial increase in the bandwidth-to-input-peak-power advantage of these structures. We find that the spectral phase can converge to a parabolic profile independent of uncompensated higher-order dispersion. The combination of an easy-to-compress phase spectrum with low input power requirements then makes sign-alternating dispersion a scheme for high-quality nonlinear pulse compression that does not require high-powered lasers, which is beneficial, for instance, in integrated photonic circuits. We also show a new nonlinear compression regime and a soliton shaping dynamic seen only in sign-alternating dispersion waveguides. Through an example SiN-based integrated waveguide, we show that this dynamic enables compression to two optical cycles at a pulse energy of 100 pJ, which surpasses the compression achieved using similar parameters in a current state-of-the-art SiN system.

Introduction Supercontinuum generation (SCG) in Kerr nonlinear waveguides is central to numerous applications, such as the generation of sub-cycle pulses [1-6], metrology using optical frequency combs [7-11], optical coherence tomography [12-14], and wide-bandwidth sources for ranging and sensing applications [15,16]. We have recently introduced the concept of repeatedly sign-alternating the dispersion along the propagation direction as a means of overcoming the stagnation of spectral bandwidth growth in SCG waveguides [17]. Sign-alternating the dispersion maintains spectral bandwidth generation (i.e., an increase in the pulse's 1/e bandwidth) in normal dispersion (ND) segments by countering the bandwidth stagnation that arises from the loss of peak power and the increase in duration. The stagnation is overcome by temporally compressing the pulse in anomalous dispersion (AD) segments. Spectral generation is also kept ongoing in the AD segments by the nonlinear temporal compression of the chirped pulse input from the previous ND segments, avoiding bandwidth-stagnant solitons. The pulse spectral and temporal dynamics are shown in the illustrative example of Figure 1.
While our concept of overcoming spectral stagnation significantly enhances the bandwidth-generation-to-peak-power efficiency [17], we would like to extend the work to explore the impact of sign-alternating dispersion on nonlinear pulse compression, both theoretically and numerically. The motivation is that nonlinear compression could now take place at much lower input peak powers because of the more efficient bandwidth generation in the alternating structures compared to conventional anomalous dispersion and dispersion-varying SCG [18-22] or concatenated anomalous dispersion SCG [23], where spectral stagnation is still intrinsically present in the methodology [17]. Lasers that otherwise have too low a peak power, such as high-repetition-rate sources, integrated chip lasers, or fiber oscillators, could directly be used to generate ultrashort pulses without the need for multiple amplification stages. In addition, the material damage accumulated with high-powered lasers would be avoided; thus, a more extensive range of waveguide geometries and materials becomes accessible when considering sign-alternating dispersion waveguides for nonlinear pulse compression.

Of primary concern when considering sign-alternation for pulse compression is the impact of the complex spectral phase (i.e., the pulse phase function associated with the frequency spectrum) that could develop in the ND and AD segments. If such a phase is present, it can limit temporal compression in the AD segments, limiting the total bandwidth generation and the overall achieved temporal compression. When only second-order dispersion is considered, the SCG spectral phase can converge to a parabolic profile, yielding the potential for high pulse compression in sign-alternating dispersion waveguides. The convergence to a parabolic phase spectrum in the presence of only second-order dispersion has been confirmed in numerous studies over the last few decades, emerging from the interaction of dispersion and self-phase modulation (SPM). A significant example of SCG convergence to a parabolic spectral phase profile is the optical wave-breaking effect of normal dispersion SCG for Gaussian input pulses [24]. A parabolic convergence of the SC phase spectrum is also due to the self-similar nonlinear evolution that occurs in the very specific case of highly chirped parabolic input pulses (named similaritons) [25-28]. However, the literature does not rigorously address the impact of the interaction of higher than second-order dispersion and SPM on the spectral SC phase for normal dispersion and for a general Gaussian pulse input. Since uncompensated higher-order dispersion is iteratively cascaded in sign-alternating dispersion waveguides, a complex spectral phase profile may emerge, despite the parabolic convergence demonstrated previously. In this paper, the impact of uncompensated higher than second-order dispersion and self-phase modulation in the ND and AD segments is related to the spectral phase development of the SCG pulse. Here, we additionally find that SPM reduces higher-order spectral phase coefficients so that the phase remains near parabolic despite significant uncompensated higher than second-order dispersion. Given this phase effect, we then show that, under certain conditions, the specific shape of the ND group-velocity dispersion (GVD) profile does not play a significant role in determining the higher-order spectral phase.
Thus, a wide choice of ND segments becomes possible for high-quality pulse compression in the alternating structures. Of fundamental interest is that this newly discovered phase effect explains more rigorously why, in general, ND SCG has the near parabolic spectral phase profile found in various experiments over the last decade [29-33]. Ultimately, we use the found spectral phase to obtain the ideal GVD profiles of the AD segments needed to make sign-alternation feasible as a method for nonlinear pulse compression. We then focus on another regime of SCG, namely when dispersion dominates, to obtain the ideal AD GVD profiles for this regime as well. Lastly, we demonstrate optimum pulse compression within the integrated photonics platform, where both AD and ND segments contribute substantially to spectral generation. We simulate an example structure on the silicon nitride platform to demonstrate our approach in obtaining compression to two-optical-cycle pulses. We describe how new soliton nonlinear compression dynamics emerge in sign-alternated dispersion structures that could significantly reduce the compressed duration compared to conventional AD SCG and overcome the fundamental soliton duration limitation in soliton-effect pulse compression. These dynamics are substantially different from conventional nonlinear compression schemes, or even from how solitons are generally managed in periodic waveguides [1,34].

Concept of Alternating Dispersion Waveguides The goals of sign-alternation are to maximize bandwidth generation in the ND and/or AD segments and temporal compression in the AD segments. Across the sign-alternating dispersion waveguide, the spectrum increases after every segment, while the pulse duration at the end of every AD segment decreases in comparison to the duration at the previous AD segment because of the additional bandwidth generation. The AD segments should ideally compress the pulse to a near transform-limited power profile. To maintain bandwidth generation and temporal compression in the AD segments, the AD segment length is limited to the length of maximal temporal compression (in most cases this is the soliton length [1]). This AD length limitation is in contrast to, for example, dispersion-alternating waveguides used to reduce parasitic nonlinearity in long-distance fiber telecommunications [35,36]. There, the length of the AD segment is extended far past the point where temporal compression occurs, such that the spectral bandwidth narrows, to compensate for the increase that occurs through SPM in other elements of the waveguide and to maintain the pulse duration at a desired average value matching the input duration.

Methods We solve the generalized 1-D nonlinear Schrödinger equation (GNLSE) across an example ND waveguide in Section 3.1 and a sign-alternating dispersion SiN integrated waveguide in Section 3.3, with dispersion terms up to the 20th order, to demonstrate the spectral phase development numerically. We assume that only a single spatial mode propagates in the waveguide structure, as is the case for the bandwidth range of the single-mode fiber and integrated waveguide we consider in this paper. The GNLSE under the slowly-varying envelope approximation [1,37] is given in Equation (1) as

∂u/∂z = i Σ_{k=2}^{20} (d_k/k!) (i ∂/∂T)^k u + iγ (1 + iτ_s ∂/∂T) (|u|² u)    (1)

where u is the complex field envelope, d_k are the Taylor series coefficients of the expansion of the frequency-dependent wavenumber about ν_o, and γ is the nonlinear parameter. T = t − V_g z is the time coordinate, co-moving in the frame of reference of the group velocity (V_g ≡ d_1^{-1}). τ_s is the characteristic timescale of self-steepening.
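For orientation, a minimal split-step Fourier sketch of Equation (1) is given below. It omits self-steepening, Raman scattering, and loss, assumes the sign convention written above, and uses placeholder fiber parameters rather than the Hi1060flex values used in the paper's simulations.

```python
import math
import numpy as np

def gnlse_split_step(u0, dt, length, nz, d_coeffs, gamma):
    """Symmetric split-step integration of Eq. (1) without self-steepening,
    Raman, or loss. u0: complex envelope samples; dt: time step (ps);
    length: m; nz: z steps; d_coeffs: [d2, d3, ...] (ps^k/m); gamma: 1/(W m)."""
    w = 2 * np.pi * np.fft.fftfreq(u0.size, d=dt)   # envelope angular frequency
    lin = np.zeros(u0.size, dtype=complex)
    for k, dk in enumerate(d_coeffs, start=2):      # i * sum_k (d_k/k!) w^k
        lin += 1j * dk / math.factorial(k) * w**k
    dz = length / nz
    half = np.exp(lin * dz / 2)
    u = u0.astype(complex)
    for _ in range(nz):
        u = np.fft.ifft(half * np.fft.fft(u))        # half dispersion step
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz)  # full SPM step
        u = np.fft.ifft(half * np.fft.fft(u))        # half dispersion step
    return u

# Placeholder run: a 72 fs (1/e power duration) Gaussian with ~15 kW peak
# power (roughly a 2 nJ pulse) in a normally dispersive fiber-like medium.
t = np.linspace(-2.0, 2.0, 2**12)                    # ps
u0 = np.sqrt(15e3) * np.exp(-0.5 * (t / 0.072)**2)
u_out = gnlse_split_step(u0, t[1] - t[0], 0.3, 2000, [9e-3, -2e-5], 5e-3)
```

The symmetric (dispersion-SPM-dispersion) stepping keeps the scheme second-order accurate in dz; extending the d_coeffs list to the 20th order, as in the paper, only lengthens the loop that builds the linear operator.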
For input, in Section 3.1, a transform-limited Gaussian pulse (1/e power duration, τ, of 72 fs) with a pulse energy of 2 nJ is used. The peak power and duration of this input pulse place SCG here in the SPM-dominated regime of the ND fiber. With these parameters, the typical characteristic spectral phase evolution of the SCG pulse in the SPM-dominated regime is exhibited. In this numerical model, for Section 3.1, we input the ND segment GVD profile of a normal dispersion fiber (Corning Hi1060flex [17]) with substantial negative third-order dispersion (TOD). We specifically choose a dispersive system where third-order dispersion cannot be neglected and would contribute significantly to the temporal and spectral phase dynamics of the pulse if nonlinear effects were not considered. We show the GVD of this fiber in Figure 2, plotted along the envelope angular frequency range of interest. The carrier angular frequency (193.4 THz) corresponds to the zero of the envelope frequencies in this example, corresponding to a pulse input wavelength of 1.55 μm. The typical loss, including the entrance splice loss, for this fiber is 0.03 dB across the used length. We note that self-steepening and Raman contributions are omitted to focus on the SPM-dispersion interaction. When enabling these effects, we find negligible impact; this is shown in Supplementary Materials I. The pulse parameters and group velocity dispersion of the alternating integrated waveguide are listed in the corresponding Section 3.3. Losses for the integrated case are mostly propagation losses (roughly 3 dB/cm). Losses caused by the transitions between different segment types are negligible, since the transitions occur adiabatically.

Convergence to Near Parabolic Spectral Phase in the SPM-Dominated Regime We begin by analyzing the spectral phase dynamics when substantial spectral generation occurs only in the ND segments (e.g., of the waveguide in Figure 1), such that the AD segments only serve to temporally compress the pulse to near transform-limited duration as it propagates along the sign-alternating dispersion waveguide.
This case is particularly beneficial for pulse compression, as normal dispersion supercontinuum generation contains fewer spectral modulations, reduced higher-order spectral phase through the well-known wave-breaking effect [24,30], and negligible modulation instability. Furthermore, we show that the specific dispersion profile in the ND segment does not play a significant role in determining the spectral phase when the effects of SPM dominate SCG in the ND segment. In quantitative terms, the invariance to ND dispersion occurs when the nonlinear length, $L_{nl} = \frac{1}{\gamma P}$ (γ being the nonlinear coefficient and P the input peak power), is shorter than the dispersion length, $L_D = \frac{\tau_o^2}{|d_2|}$ (τ_o is the intensity 1/e half duration of the pulse entering a segment, d_2 is the second-order GVD coefficient), within the segment, or the inverse soliton ratio, $R \equiv \frac{L_{nl}}{L_D} < 1$ [1,24]. In the SPM-dominated regime, when R < 1, the normalized rate of increase of the 1/e spectral energy density bandwidth across the propagation coordinate, z, given as $\frac{1}{\Delta\nu_o}\frac{d\Delta\nu}{dz}$, is much higher than the rate of duration increase, given as $\frac{1}{\tau_o}\frac{d\tau}{dz}$, for a large part of bandwidth generation, i.e., before the overall duration increase reduces the rate of bandwidth increase such that bandwidth stagnation occurs. Δν labels the 1/e bandwidth, Δν_o is the initial bandwidth, and τ labels the 1/e power duration, with τ_o being the original duration. This steeper increase in bandwidth compared to duration results in the spectral phase, ϕ, profile being stretched over a larger bandwidth without correspondingly increasing its values to maintain its shape (see illustration in Figure 3), resulting in a decrease in its curvature and thus in its phase Taylor coefficients. Figure 3 is an illustrative example of a pulse travelling in the SCG-dominated regime, where the pulse duration remains approximately the same between two propagation locations. This results in the phase order coefficients lowering exponentially with the order number, i.e., with variables as defined in Figure 3, the phase coefficients at the latter propagation location in the illustration are given by Equation (2), with derivation in Supplementary Materials II,

$$\beta_n(z_2) = \beta_n(z_1)\left(\frac{\Delta\nu(z_1)}{\Delta\nu(z_2)}\right)^{n}. \qquad (2)$$

The progressive reduction in higher-order derivatives results in a dominant parabolic spectral phase profile emerging, independent of higher-order dispersion in the waveguide. In ND waveguides, this spectral phase scaling effect, combined with the wave-breaking effect, produces the near parabolic spectral phase profile. Equation (2) of the illustrative case extends to the general SPM ND SCG-dominated case (described above), as will be shown in Sections 3.1.1-3.1.4.
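Since Supplementary Materials II is not reproduced here, a one-line sketch of where the scaling in Equation (2), as reconstructed above, comes from may help; it assumes the pure horizontal stretch of the phase pictured in Figure 3.

```latex
% The phase is stretched over a larger bandwidth while keeping its values:
% \phi_{z_2}(\nu) = \phi_{z_1}(s\,\nu), with s = \Delta\nu(z_1)/\Delta\nu(z_2) < 1.
% Differentiating n times by the chain rule (with \nu_o = 0 at the carrier),
\beta_n(z_2) \equiv \left.\frac{d^{n}\phi_{z_2}}{d\nu^{n}}\right|_{\nu_o}
            = s^{\,n}\left.\frac{d^{n}\phi_{z_1}}{d\nu^{n}}\right|_{s\,\nu_o}
            = \left(\frac{\Delta\nu(z_1)}{\Delta\nu(z_2)}\right)^{\!n}\beta_n(z_1).
% Each additional order picks up another factor s < 1: an exponential
% reduction of the coefficients with the order number n.
```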
To show the spectral phase reduction in more detail numerically and analytically, we first start by plotting the 1/e angular frequency bandwidth of the spectral energy density, Δν, against the propagation distance, Δz, within the ND fiber in Figure 4, curve c, to obtain the bandwidth increase dynamics for the example ND waveguide segment. The resulting saturation curve is characteristic of spectral bandwidth development in ND SCG within the SPM-dominated regime. The bandwidth increases due to SPM, which depends on pulse power and duration. We show the propagation up to 30 cm, which is approximately the saturation length, L_sat, of the bandwidth development.

SPM Effects on Second-Order Spectral Phase

We start our analysis of the Taylor coefficient development of the spectral phase with the second-order contribution ($\beta_2 \equiv \left.\frac{d^2\phi}{d\nu^2}\right|_{\nu_o}$). We plot the ratio of β_2 with SPM to the β_2 without SPM (labeled β_2o) versus propagation in Figure 4, curve a, obtained from the GNLSE numerical simulation. β_2o is equal to d_2 Δz, where d_2 is the waveguide GVD coefficient (taken about ν_o). Here we see qualitatively the expected trend as described above, i.e., a decrease in β_2 past a certain propagation distance (when the pulse obtains a specific bandwidth value). Then, when spectral development ceases close to the saturation length, the reductive effect of SPM on β_2 stops and β_2 asymptotically approaches β_2o, i.e., the ratio plotted in Figure 4, curve a, approaches one. However, at the beginning of the pulse propagation, there is an increase in β_2 to a peak value at approx. 2 cm before its subsequent decrease. While the peak of β_2 can be attributed to parabolic phase additions of SPM within the beginning region of propagation, the subsequent decrease in β_2 is due to the reductive effect of SPM on the spectral phase coefficients, as explained at the start of Section 3.1. More details of how the peak emerges can be found in the Supplementary Materials.
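As an illustration of how such curves can be produced from a simulated field, the sketch below estimates the Taylor phase coefficients β_n by a weighted polynomial fit of the unwrapped spectral phase. The function name, the weighting choice, and the fit itself are assumptions, not the authors' post-processing.

```python
# Sketch: estimating the spectral phase Taylor coefficients beta_n of a
# simulated envelope u (sampled every dt seconds) by a weighted polynomial
# fit of the unwrapped spectral phase.  Weighting by spectral amplitude
# confines the fit to the populated band; one reasonable choice of several.
import math
import numpy as np

def phase_coefficients(u, dt, max_order=3):
    spec = np.fft.fftshift(np.fft.fft(u))
    omega = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(u.size, d=dt))
    phase = np.unwrap(np.angle(spec))
    w = np.abs(spec)                      # fit weights ~ spectral amplitude
    c = np.polynomial.polynomial.polyfit(omega, phase, deg=max_order, w=w)
    # phase ~ sum_n c_n * omega**n  =>  beta_n = n! * c_n
    return {n: math.factorial(n) * c[n] for n in range(2, max_order + 1)}
```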
For further pulse propagation past the peak of the second-order spectral phase coefficient, it is shown in Supplementary Materials III that, when factoring in higher-order phase contributions, the maximal value of β_2 scales as shown in Equation (3),

$$|\beta_2(\Delta z)| \propto \frac{\tau(\Delta z)}{\Delta\nu(\Delta z)}. \qquad (3)$$

Going back to Figure 4, curve a: β_2, for most of its development, is higher than β_2o and only decreases to 96% of the value of β_2o. A stronger reduction in β_2 below β_2o can be achieved, for example, with input pulses of shorter duration where the peak power is raised such that the ratio R is conserved. However, what is more critical for high-quality pulse compression is to maintain a parabolic profile despite higher-order uncompensated dispersion. Given this sufficiency, the criterion simply becomes that, regardless of the value of β_2, the total second-order spectral phase contribution, ϕ_2, dominates the third-order contribution, ϕ_3.

SPM Effects on Third-Order and Higher Spectral Phase

The criterion for a parabolic phase profile is achieved via a higher reductive effect of SPM on β_3. It is derived in Supplementary Materials IV that β_3 scales at most with the bandwidth as indicated in Equation (4),

$$|\beta_3(\Delta z)| \propto \frac{\tau(\Delta z)}{\Delta\nu(\Delta z)^{2}}. \qquad (4)$$

Thus, β_3's dependency on SPM bandwidth generation, $|\beta_3(\Delta z)| \propto \frac{1}{\Delta\nu(\Delta z)^2}$, scales much more strongly than that of β_2 ($\beta_2 \propto \frac{1}{\Delta\nu}$). The higher dependence of β_3 on the bandwidth contributes to the peak magnitude of β_3/β_3o being less than that of β_2/β_2o and to the decrease in β_3/β_3o reaching a lower value versus propagation. To verify the above, we plot the ratio of β_3 to β_3o = d_3 Δz along the fiber length in Figure 4, curve b, obtained from the GNLSE numerical simulation. The figure shows that indeed the weaker peak and a larger decrease in β_3 occur, in comparison to the propagation dynamics of the second-order coefficient. The ratio of β_3 to β_3o grows at first, to about 90 percent, and then descends to a minimum of 13% at about 20% of L_sat (5 cm). In the region where bandwidth saturation starts to occur, the ratio then grows again and eventually approaches one slowly, in the asymptotic limit, for the same reasons given for the second-order coefficient case. However, even at the saturation length, the ratio is substantially below unity, and across the full propagation it remains below one. SPM substantially reduces the third-order spectral phase coefficient below the value found by only considering dispersion without SPM everywhere in the ND fiber. The larger decrease in β_3(Δz) versus β_2(Δz) with respect to frequency leads to the ratio β_3(Δz)/β_2(Δz) approximately scaling as

$$\left|\frac{\beta_3(\Delta z)}{\beta_2(\Delta z)}\right| \propto \frac{1}{\Delta\nu(\Delta z)},$$

showing analytically the phase convergence to a near parabolic profile even with substantial higher-order dispersion. A similar method to the third-order coefficient derivation is used to extend the analysis to higher orders, where, maximally, $|\beta_n(\Delta z)| \propto \frac{\tau(\Delta z)}{\Delta\nu(\Delta z)^{n-1}}$, showing that, in the general case of SPM-dominated ND SCG, the conclusion of Figure 3 holds: the reductive effect of SPM on the phase coefficients increases exponentially with the order number via the frequency bandwidth.

SPM Leads to a Parabolic Spectral Phase Convergence

The exponential reduction in phase coefficients naturally leads to the overall phase function being parabolic, as shown by the results of the GNLSE simulation in Figure 5. Figure 5 plots the percentage of the third-order spectral phase contribution relative to the second-order contribution at the 1/e bandwidth frequency value versus propagation distance in the fiber.
The percentage value is at a maximum at the 1/e bandwidth frequency, representing the maximal deviation from a parabolic phase profile. The ratio with SPM contributions stays well below the same ratio where no SPM is present. This ratio dips to a factor of 23 less than the linear case (0.2%), indicating a strong convergence to a parabolic profile at 20% of the spectral saturation length (5 cm, also where β_3 shows its minimum in Figure 4, curve b).

Impact of SPM-Induced Phase Coefficient Reduction in the Design and Context of Alternating Dispersion Waveguides

Turning to the dispersion profile of the subsequent AD segment needed to compress the pulse coming from the ND segment, we find that the AD GVD profile must cancel the spectral phase accumulated in the ND segment for close to transform-limited pulse compression [38]. The AD GVD profile must then be close to a flat profile in the SPM-dominated regime of SCG and is not strongly dependent on the ND GVD profile, through the above-demonstrated invariance of the ND SCG spectral phase to it. In a practical alternating dispersion waveguide setup, the optimum ND segment length should be chosen such that the magnitude ratios of the output spectral phase coefficients, e.g., β_3 to β_2, obtained from numerical simulations like the ones shown above, are close to the corresponding ratios found from dispersion at the end of the AD segments, so that compression is close to the transform limit in the AD waveguide segments. The optimal ND length is usually where the phase is the most parabolic, since this reduces the complexity of the AD GVD profiles and enables high subsequent pulse compression. Simultaneously, the chosen ND length must also yield a high enough bandwidth increase factor (output bandwidth to input bandwidth, usually > 1.5) in that ND segment, so that losses are minimized across the structure (since fewer segments are then needed). In our example, the optimal ND segment length is shown as the vertical dotted line in Figures 4 and 5 at 5 cm. Interestingly, the ratios ϕ_3/ϕ_2 and β_3/β_2 are not minimized when β_2 reaches a peak but when β_3 is minimized. At this ND length, the bandwidth increase factor at the end of the segment is 2.5, which is considered large for segments in sign-alternating dispersion waveguides. A non-ideal AD GVD profile in the SPM-dominated regime of SCG may not be problematic, since the waveguide consists of subsequent periods of ND-AD segments. After the pulse emerges from this (non-ideal) AD segment, it still enters a subsequent ND segment, where the nonlinear generation reduces the higher-order spectral phase coefficients of the entering pulse. In the context of the sign-alternating dispersion waveguides, the bandwidth entering subsequent ND segments continually increases, resulting in a more considerable decrease in higher-order spectral phase coefficients (e.g., as seen in Equation (4)).
This more considerable decrease would, in turn, result in a stronger convergence to a parabolic spectral phase profile across the sign-alternating dispersion structure, versus a uniform ND SCG waveguide, for segments where SCG is in the SPM-dominated regime. The advantage of sign-alternating dispersion waveguides for pulse compression then lies both in spectral bandwidth versus input peak power efficiency and in the obtained spectral phase profile for pulse compression applications. For example, this reduction in the input phase explains the experimental result described in the results section of [17]: specifically, the maximum third-order phase contribution, occurring at the endpoints of the 1/e bandwidth of the SCG pulse, is a factor of three less than the linear case at the output. The factor of three reduction holds despite uncompensated higher-order dispersion in the AD segments.

AD GVD in the Dispersion-Dominated Regime

Having obtained that a flat AD GVD profile is ideal for the SPM-dominated regime, we turn our attention to when dispersion dominates over the pulse's propagation in the supercontinuum generation (e.g., in ND segments where R > 1). The dispersion-dominated regime is always present in sign-alternating dispersion waveguides after many alternations or with exceedingly low input powers. In general, the AD GVD profiles have to transition from those for the beginning SPM SCG regime, through those belonging to a transition regime, to the ideal profile for the final dispersion-dominated regime across the alternating structure. Supplementary Materials V contains a general method for constructing the ideal AD GVD profiles that compensate for a general spectral phase from the ND segment. The method constructs the AD segment from sub-segments, each of which solves for a particular dispersion coefficient. The AD GVD profiles become more critical for the dispersion-dominated regime, as no substantial SPM phase effect is present anymore. The AD GVD profiles converge to a scaled reflection of the previous ND segment GVD curve about the frequency axis, with the scaling factor determining the AD segment's length, i.e., what would be expected in linear pulse compression, as shown in Equation (5),

$$\text{AD GVD}(\nu) = -c \times \text{ND GVD}(\nu), \quad c > 0, \qquad (5)$$

and the AD segment length is the previous ND segment length divided by c. From Equation (5), for a constant ND segment length, the AD segments converge to a constant length (provided the same AD and ND waveguides are used throughout the alternating structure). Then, periodic waveguides can be constructed, where the spectrum increases linearly across the waveguide's ND segments [17], provided losses are negligible. Ultimately, this allows for sign-alternation to be used in a resonator or pulse circulator configuration, e.g., in a scheme similar to [39,40]. The dynamics within the dispersion-dominated regime provide another option for the design of AD segment dispersion. Instead of maximal pulse compression within a few segments, where the AD segment profiles become hard to engineer and keep changing, one can use a simple periodic arrangement of highly dispersive, unchanging segments. This simple arrangement would achieve the same spectral frequency bandwidth, and thus the same temporal compression ratio, albeit with many more segments. The increased number of segments makes it critical to minimize losses and to understand how they affect bandwidth generation.
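Before turning to losses, a tiny numeric illustration of Equation (5) may be useful; the GVD values, frequency range, and segment length below are made-up placeholders, not parameters from the paper.

```python
# Illustration of Equation (5): in the dispersion-dominated regime the AD
# segment GVD converges to a scaled reflection of the ND GVD about the
# frequency axis, and the AD length is the ND length divided by c.
import numpy as np

nu = np.linspace(-50e12, 50e12, 5)      # envelope frequencies (Hz), made up
nd_gvd = 25e-27 * np.ones_like(nu)      # hypothetical flat ND GVD (s^2/m)
c, L_nd = 0.1, 0.05                     # scale factor and ND length (m)

ad_gvd = -c * nd_gvd                    # Eq. (5): reflected, scaled GVD
L_ad = L_nd / c                         # corresponding AD segment length
print(ad_gvd[0], L_ad)                  # -2.5e-27 s^2/m, 0.5 m
```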
The results of [17] for the dispersion-dominated regime are extended to incorporate the effects of losses, i.e., the total bandwidth increase as a function of the number of bandwidth-generating segments, n, with losses, is given in Equation (6) as

$$\Delta\nu(n) = \Delta\nu_o + \delta\nu \sum_{k=1}^{n} (1-\varepsilon)^{k-1}, \qquad (6)$$

where ε is the segment-specific loss and $\delta\nu \approx \frac{0.81\,\gamma E}{4|\beta_2|}$ (E being the pulse energy) is the bandwidth increase in one segment. Equation (6) becomes a linearly increasing function versus n if losses are negated. The dispersion-dominated regime has been explored in a sign-alternating dispersion silica fiber waveguide [41], although not stated as such in that publication. The authors manage to extend the spectral generation to obtain a moderate compression factor by use of this regime across all six segments except the first two. While the experimental results in [41] do not differ significantly from [17], the potential of using the dispersive SCG regime for pulse compression is shown.

Sign-Alternating Dispersion in Integrated Photonics

A valid concern for pulse compression is the impact of spectral generation in AD waveguide segments in a practical setting. For example, if the input pulse energy exceeds the fundamental soliton energy [1], E ≈ 4.5|β_2|/(γτ_o), in the AD segments, where τ_o is the 1/e duration of the pulse entering the segment, substantial spectral generation can take place in these segments even with a low nonlinear coefficient compared to that of the ND segments. To address AD segment nonlinear compression, we simulate the pulse evolution in a sign-alternating dispersion silicon nitride waveguide where both segment types have high nonlinear coefficients (ND: 2.24 (W·m)^-1, AD: 0.33 (W·m)^-1). The GVD profiles versus pulse envelope angular frequency for both segment types are given in Figure 6. The waveguide input pulse has a 1/e duration of 144 fs with a Gaussian profile centered at 1550 nm. The input pulse energy of 100 pJ is substantially higher than the soliton energy in the first AD segment (approx. 40 pJ). Thus, generation is carried out in both ND and AD segment types. The constructed sign-alternating waveguide's temporal development is indicated in Figure 7a, where a final 1/e pulse duration of approximately 11 fs is obtained (approx. two optical cycles), giving a temporal compression factor of 12. Additionally indicated in the figure are the waveguide segments' lengths, where red indicates an ND segment and yellow an AD segment. Figure 7b compares the simulated pulse power profile versus pulse time, normalized to the peak power, of both the transform-limited profile and the output profile from the sign-alternating structure. However, the output pulse exhibits more features below the 20 percent level of the peak power than the transform-limited profile, of duration 10 fs, due to the uncompensated phase from AD SCG. The pulse energy contained in these features is approx. 30% of the total pulse energy. Thus, 70% of the pulse energy lies in the main peak, which still may be adequate for further frequency generation experiments.
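As a quick numeric cross-check of the soliton energy threshold quoted above, the snippet below evaluates E ≈ 4.5|β_2|/(γτ_o) for the first AD segment. The GVD value is a back-solved placeholder chosen to reproduce the quoted ~40 pJ; it is not a parameter taken from the paper.

```python
# Cross-check of the fundamental soliton energy E ~ 4.5*|beta2|/(gamma*tau0)
# for the first AD segment.  beta2_ad is a hypothetical placeholder value,
# back-solved so the result matches the ~40 pJ quoted in the text.
gamma_ad = 0.33        # (W m)^-1, AD segment nonlinear coefficient (from text)
tau0 = 144e-15         # s, 1/e input pulse duration (from text)
beta2_ad = -4.2e-25    # s^2/m, assumed AD GVD coefficient (placeholder)

E_sol = 4.5 * abs(beta2_ad) / (gamma_ad * tau0)
print(f"fundamental soliton energy ~ {E_sol * 1e12:.0f} pJ")  # ~40 pJ
```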
Discussion

To put the performance of the sign-alternating dispersion SiN waveguide into context, we compare it to the current state-of-the-art SiN waveguide system with comparable input pulses [18]. We find that this example's compressed duration is substantially shorter, and its overall compression ratio substantially larger, than those of [18], at a lower power requirement. The work of [18] obtains a 1/e duration of 40 fs with two-segment concatenated AD SiN waveguides, with a compression ratio of 5, using a pulse energy of 220 pJ. The concatenated system requires more than a factor of two more pulse energy while achieving more than a factor of two less compression than our concept. The increased power efficiency with a larger compression ratio, found by sign-alternating the dispersion, compared to conventional methodologies, is thereby demonstrated. New soliton and pulse compression dynamics can explain the substantially higher performance of sign-alternation over the state-of-the-art soliton-effect pulse compression that these structures possess. Two factors limit current AD pulse compression:

1. The minimal compressed duration is the duration of the fundamental soliton allowed with a given pulse energy, because spectral generation terminates once a soliton is formed, and the fundamental soliton is the shortest allowed.

2. The final pulse duration may not reach the lower limit described in 1., since formation of a higher-order soliton, at a narrower bandwidth, stops nonlinear compression to the fundamental. The higher-order soliton forms first because its generation length is shorter by a factor of the soliton number than that of the fundamental soliton [1,40].

Both limitations are bypassed in the sign-alternating dispersion waveguide. To address limitation 1: Spectral generation in the ND segments, partnered with access to a new regime of spectral generation in the AD segments, named chirped-pulse temporal compression, removes the fundamental soliton limitation, as these processes do not depend on the dynamics of soliton formation. The chirped-pulse temporal compression arises when the chirped pulse from the first ND segment enters the next AD segment, where the pulse temporally compresses. In this region, the pulse exhibits a positive second-order spectral phase and a stretched duration relative to its input profile, both of which are reduced as the pulse propagates. This reduction increases the pulse's bandwidth through SPM. Notably, this chirped-pulse region is shorter than in the linear case, since bandwidth generation continually reduces the pulse's dispersion length. When the pulse's spectral phase profile reaches zero second-order phase, nonlinear pulse compression to a soliton extends the spectral generation further. This extended pulse compression is the second region of propagation, named "further soliton nonlinear compression." Within this region, the second-order phase remains zero, and the typical AD SCG dynamics of nonlinear pulse compression to the soliton fission propagation point, z_s [1], occur. Both regions are depicted in the first AD segment shown in Figure 8, which shows the waveguide's spectral development.
Having defined the two central regions of AD nonlinear compression in the alternating waveguide, we turn back to how the fundamental soliton limitation in conventional soliton-effect compression is surpassed. Once the pulse bandwidth at the exit of the "chirped-pulse compression" region is equal to the fundamental soliton 1/e bandwidth of a given AD segment, namely $\Delta\nu_{sol} = 0.71\,\frac{\gamma E}{|\beta_2|}$, soliton-effect compression cannot occur, and the "further soliton nonlinear compression" region is not present. However, once the fundamental soliton bandwidth is reached, spectral development still occurs in the ND and AD segments, since ND SCG and the "chirped-pulse compression" region are still present (as seen in Figure 8). Therefore, further nonlinear pulse compression occurs past the fundamental soliton duration. Turning to an example: for our simulation, the fundamental soliton bandwidth associated with the pulse energy is Δν_sol ≈ 120 THz, which is surpassed in the first AD segment. The output bandwidth of 153 THz in the first segment is slightly higher than the soliton bandwidth. This is attributed to higher-order dispersion in the AD segment, which reduces the GVD from the central value (as seen in Figure 6) and thus enhances SPM bandwidth generation. In contrast, the bandwidth predicted from the idealized soliton conditions assumes only a flat GVD. Because the soliton bandwidth is surpassed in the first segment, soliton-effect compression cannot take place in any succeeding segment. Nevertheless, by the remaining spectral generation mechanisms in the ND and AD segments, the total spectral development of 437 THz, and the corresponding temporal compression ratio, are substantially above the fundamental soliton limitation in conventional soliton-effect compression (i.e., approx. a factor of three larger). We expect that, in the limit of many segments, for both segment types, spectral generation would converge to what is indicated in Equation (6) in the later dispersion-dominated segments. Therefore, the bandwidth increase will converge to a linear increase, provided the AD GVD profile satisfies Equation (5) for the latter segments and losses are managed. For our example, within the output pulse bandwidth, the AD GVD profile satisfies Equation (5) with c ≈ 0.1, contributing to the high pulse compression factor obtained. The continual spectral increase then removes any minimal duration limitation present in current AD nonlinear compression schemes. To address limitation 2: Before the fundamental soliton bandwidth can be reached, the pulse may stop decreasing in duration due to forming a higher-order soliton. However, this limitation is completely bypassed by sign-alternating the dispersion. Again, the additional mechanisms of spectral increase in the ND and AD segments, e.g., the AD chirped-pulse region, disrupt formation of a higher-order soliton, allowing access to the full fundamental soliton bandwidth and then beyond, by the mechanisms described in addressing limitation 1. To show how a higher-order soliton is disrupted, we start with the conventional non-alternated AD SCG nonlinear compression dynamic. In the conventional case, the pulse typically shapes into the highest-order soliton possible, where the soliton order, m, and 1/e duration, τ_sol, satisfy Equation (7) [1],

$$m^2 \approx \frac{\gamma E\,\tau_{sol}}{4.5\,|\beta_2|}, \qquad (7)$$

such that τ_sol is less than the input pulse duration.
Subsequently, the pulse undergoes soliton fission into solitons of approximately the same duration as the higher-order soliton, with no further temporal compression. Since, for higher-order solitons, m^2 > 1, τ_sol is higher than that of the fundamental soliton allowed by the waveguide. This effect particularly limits how the compression ratio of AD nonlinear pulse compression scales with energy [1,42,43]. In contrast to the conventional case, once the higher-order soliton is reached in an AD segment, there is still bandwidth generation in the next ND segment and in the subsequent AD segment's chirped-pulse compression region. This bandwidth generation reduces the duration found at the start of the next AD segment's further soliton nonlinear compression region, labeled τ′, compared to the duration at the end of the previous AD segment (the end duration is close to that segment's higher-order soliton duration, τ_sol), i.e., τ′ < τ_sol. The new τ_sol obtained at the end of this AD segment is therefore smaller than or equal to τ′. From Equation (7), the new τ_sol must correspond to a smaller m^2, i.e., a smaller soliton order, than in the previous AD segment. In this fashion, the reduction in soliton order continues in subsequent AD segments until the fundamental soliton bandwidth condition is reached, i.e., when m^2 = 1. In sum, the reduction in soliton order, combined with the further bandwidth generation beyond the fundamental soliton bandwidth allowed by sign-alternation, lowers the obtained pulse durations at a given input pulse energy over conventional AD SCG, which is limited to the duration and spectrum of the first higher-order soliton reached.

Conclusions

We found that ND SCG is robust to higher-order dispersion through a newly shown horizontal scaling of the spectral phase. Therefore, ND segments do not have to be chosen for a specific dispersion profile shape in sign-alternating dispersion waveguides used for nonlinear pulse compression. This robustness is increased within our sign-alternating structures, so that our waveguide concept combines a near parabolic spectral phase profile with an increased input-power-to-bandwidth efficiency. Thus, we foresee that our scheme has the potential for few-cycle pulse compression without the onus of high peak power drive lasers, making new laser sources (e.g., high repetition rate lasers, integrated photonics laser sources) accessible for pulse compression. Furthermore, conditions that lead the AD and ND segment lengths to converge to a constant were explored, along with the corresponding segment dispersion profiles needed. The convergence enables the use of sign-alternating dispersion waveguide SCG in resonator configurations. Results of nonlinear pulse compression in the integrated photonics setting were shown. The final compressed pulse had a duration of 11 fs, corresponding to two optical cycles. This duration is a factor of three less than the duration obtainable by conventional soliton dynamics. We then described the new soliton dynamics that emerge within the alternated structure and that can significantly enhance pulse compression.

Funding: The authors would like to acknowledge funding from the MESA+ Institute of Nanotechnology within the grant "Ultrafast switching of higher-dimensional information in silicon nanostructures". The authors would also like to acknowledge funding from the Netherlands Organisation for Scientific Research (NWO) Demonstrator grant, No. 18562.
A Day-Ahead Photovoltaic Power Prediction via Transfer Learning and Deep Neural Networks

Climate change and global warming drive many governments and scientists to investigate new renewable and green energy sources. Special attention is given to solar panel technology, since solar energy is considered one of the primary renewable sources and solar panels can be installed in domestic neighborhoods. Photovoltaic (PV) power prediction is essential to match supply and demand and to ensure grid stability. However, PV systems have strongly stochastic behavior, requiring advanced forecasting methods, such as machine learning and deep learning, to predict day-ahead PV power accurately. Machine learning models need a rich historical dataset that includes years of PV power outputs to capture the hidden patterns between essential variables and predict day-ahead PV power production accurately. Therefore, this study presents a framework based on the transfer learning method to use reliable, trained deep learning models of old PV plants in newly installed PV plants in the same neighborhoods. The numerical results show the effectiveness of transfer learning for day-ahead PV prediction in newly established PV plants for which a sizable historical dataset is unavailable. Among all nine models presented in this study, the LSTM models have better performance in PV power prediction. The new LSTM model using the inadequate dataset has a mean square error (MSE) of 0.55 and a weighted mean absolute percentage error (wMAPE) of 47.07%, while the transferred LSTM model improves the prediction accuracy to an MSE of 0.168 and a wMAPE of 32.04%.

Introduction

Given the growing role of smart grids and microgrids, whose dependence on renewables (specifically PV plants) has been increasing since net-zero emission policies were adopted to decarbonize the electricity generation sector, the production of affordable forecasts of PV power output has become a primary issue. PV power predictions are helpful since the variability of global radiation can affect both the amount of electricity produced and grid stability. Therefore, reliable forecasting can help to improve system stability by indicating the power generation possible in the future. In particular, this process is useful when the energy production comes not only from PV plants but from a combined system of electricity generators. Affordable forecasting leads to energy optimization and management, making PV integrable into smart buildings and also into charging infrastructures for electric vehicles (EVs) [1,2]. Therefore, providers require a way to implement a switching controller to shift from one energy source to another to optimize the combination of electricity sources [3-6]. Recent studies have explored various methods to forecast photovoltaic (PV) power output, including phenomenological, statistical, machine learning, and hybrid approaches [7]. Deterministic forecasting predicts power production by examining and modeling a specific phenomenon, but this method can be inadequate as it ignores uncertain data. On the other hand, statistical and machine learning approaches have many benefits over deterministic forecasting. They are capable of dealing with complex relationships, providing more accurate forecasts, managing unstructured data, and automating the forecasting procedure.

Table 1. Forecasting type, method, and utility based on the different approaches for predictions [8,11,14,18-21].
| Approach Type | Forecasting Type | Method | Utility |
| --- | --- | --- | --- |
| Phenomenological approach | Medium/long-term forecasting | Numerical weather prediction, satellite images for regional models. | Maintenance and PV plant planning. |
| Statistical approach | Short-term forecasting up to one day ahead | Regression models, exponential smoothing, autoregressive models, autoregressive integrated moving average, time series ensembles, and probabilistic approaches. | Control of power system operation, unit commitment, and sales. |
| ML approach | From short-term forecasting up to the long-term horizon | Cross-sectoral method, which combines models and artificial intelligence. | Production, anomaly detection, and energy disaggregation. |
| Hybrid approach | From short-term forecasting up to the long-term horizon | Combine one of the mentioned advanced methods with one physical or statistical approach. | From short-term power production to maintenance and plant planning. |
| Probabilistic approach | From short-term forecasting up to the medium-term horizon | Provide output with quantile, interval, and density functions. | Electric load forecasting. |

The ML approach is a powerful tool that leverages the computational power of artificial intelligence. This approach can learn from historical data and continuously improve its predictive ability. As a result, it can identify unreliable and inconsistent data without the need for explicit formulae [22,23]. Consequently, the use of ML has expanded to a wide range of fields, including pattern recognition, data mining, classification, filtering, and forecasting, due to its ability to handle and process large amounts of data and improve its accuracy over time [9]. Its adaptability and effectiveness in solving complex problems have made it a popular and widely used technique across various industries. Among the ML techniques are artificial neural networks, multilayer perceptron neural networks, recurrent neural networks, feed-forward neural networks, and feedback neural networks. Nowadays, the state of the art is deep learning (DL) and deep neural networks, which are a specific type of artificial neural network. Their main characteristic is the ability to create complex and complete models from huge input datasets, through improved learning algorithms, better parameter analysis methods, and numerous hidden layers [24]. DL is a machine learning technique that uses algorithms to make predictions based on the logic found in the input data. It improves the ability to identify local optima and estimate aggregation rates [3]. Several DL techniques exist, working on different types of data in their algorithms, and they can be clustered by application. DL techniques are widely used in forecasting related to electric power system applications, such as load forecasting, renewable power production, power quality disturbance detection, and fault detection [25-27]. Deep learning can be divided into several categories, each with a different approach to learning from data [26]. In deep supervised learning, the algorithm uses labeled data to make accurate predictions with minimal error. Deep semi-supervised learning uses a combination of labeled and unlabeled data for training. On the other hand, deep unsupervised learning does not rely on labeled data and instead focuses on finding patterns in the dataset itself [28]. Another essential aspect of deep learning is deep reinforcement learning, which utilizes reinforcement learning techniques to optimize decision-making in fields such as building energy management and smart grid applications.
In this approach, the goal is to increase rewards through responses to changing conditions [29]. The ML application process for forecasting targets is divided into three main steps: preprocessing, forecasting, and evaluation. In the first part of the process, the dataset is preprocessed to be in the correct format, with no missing values, outliers, or erroneous values. In this stage, the required characteristics are identified and selected. During the forecasting stage, the known target values of the data are processed with the selected feature set to implement the prediction model. In the last stage, models are generated, evaluated, and merged using statistical evaluations. Finally, the best model and feature set are used to process data and generate predictions [24]. Following the ML techniques above, several worldwide applications have been implemented with different aims [9,30,31]. Ref. [32] is an in-depth review of condition monitoring of PV systems based on ML, divided into three subcategories: ordinary sensors, image acquisition (conventional ML and DL), and knowledge-driven. In addition, [33] presented a case study in Malaysia where ML was used for power plant planning with the cooperation of GIS tools, and remarked on the capability of AI to make other sources interoperable with PV plants. As mentioned, ML is also used to identify not only production and faults but also issues linked to shaded or partially shaded cells [34]. A case study with an innovative ML model for short-term PV power prediction is proposed in [35], and in [36] PV output predictions are applied to ships. Significant innovative applications operate in the Middle East region, as reported in [37], where three ML models for PV power output in Saudi Arabia are implemented. Similarly, in [7,38] an ML-based prediction was studied for PV power forecasting considering several environmental parameters in Qatar. For a specific problem, however, there may be no historical data available with which to build a forecasting model based on the techniques described. Since it is then difficult to build an accurate model or leverage historical data collection or learning, similar learned situations with other data can be used [39]. Transfer learning (TL) is an ML method in which a model can tackle new challenges thanks to knowledge transferred from a related, previously learned task [40]. The TL process requires similar environments for the replicability of the model, and a validation process, due to the dependence on applications and the difficulty in generalizing for some necessities. An interesting application of the TL model is proposed in [41]: given the difficulty of obtaining enough data for monthly forecasting of electric load, a modern predictive scheme based on TL is proposed using similar data from other cities or districts. Many other applications suit TL perfectly, such as the ones reported in [42]. Among the different applications, TL techniques have also shown their value in PV output prediction. TL has been applied to the automatic detection of PV module defects [43]. Finally, in [44], TL is proposed to predict PV power output using historical irradiance data and the hyperparameters of a long short-term memory neural network, fine-tuning the deep transfer model with output data.
Accurate PV power prediction based on machine learning models requires a rich historical dataset that includes years of PV power outputs to recognize hidden patterns between the most impactful variables related to PV production. In recent years, deep learning has provided a unique capability in extrapolation and prediction in various applications, such as PV or solar energy generation. Since the reliability of these methods depends heavily on historical datasets, these advanced methods are ineffective at making accurate predictions in the conventional way, especially for newly installed PV plants. Therefore, this study presents a new framework based on the transfer learning method to transfer learned knowledge from the deep learning models of old PV plants to newly installed PV plants in the same region. This study sheds light on the application of transfer learning in day-ahead PV power prediction, demonstrating its potential to significantly improve the efficiency and performance of newly installed PV plants. The proposed framework is a novel approach that leverages the knowledge obtained from the deep learning models of established PV plants to tackle the challenges faced by newly installed ones for day-ahead PV power prediction. This approach not only reduces the need for extensive training data but also ensures that new plants can benefit from the experiences and insights gained from the existing ones. Therefore, the main contribution of this study lies in the ability of transfer learning to promote the efficient and effective deployment of new PV plants, thus contributing to the sustainable development of renewable energy. The findings of this study indicate that the transferred models that have been retrained using the new dataset outperform the other models. Of the nine models presented in this study, the retrained transferred LSTM model demonstrated the best accuracy, as evidenced by its low MAE of 0.211, MSE of 0.168, MAPE of 74%, RMSE of 0.403, and wMAPE of 32.04%. The achieved results demonstrate the effectiveness of the proposed approach and provide strong support for the viability of transfer learning in the context of day-ahead PV power production for a newly installed PV plant. The remainder of the article is structured as follows. Section 2 explains the methodology used in this study based on neural networks and transfer learning. The results of the modeling and a discussion of the achieved outcomes are presented in Section 3. Section 4 concludes with final remarks.

Methodology

Different deep learning models, namely the feedforward neural network (FNN), the convolutional neural network (CNN), and long short-term memory (LSTM), have been used in this paper to analyze the effectiveness of transfer learning in predicting day-ahead PV power production in newly installed PV farms. FNN is a simple and straightforward model that can be used for basic prediction tasks. CNN is particularly suitable for image and signal processing tasks, making it an ideal choice for analyzing time-series data with a strong spatial component, such as day-ahead PV power production in a newly installed PV farm. LSTM, on the other hand, is a type of recurrent neural network (RNN) that is particularly effective in capturing long-term dependencies in sequential data. In predicting day-ahead PV power production, LSTM can effectively capture the temporal dynamics of the data and, according to the literature, make more accurate predictions.
The FNN model performed well in the initial stages, but the CNN and LSTM models provided better results thanks to their ability to extract spatial and temporal features. The models are trained with the Adam stochastic optimization method and an exponential-decay learning rate schedule for 1500 epochs. A Bayesian optimization algorithm has chosen the hyperparameters for each network.

Linear Model

Linear regression is a statistical method that finds the best linear relationship between independent variables (also known as predictor or explanatory variables) and dependent variables (also known as outcome or response variables). Linear regression can be used for both simple and multiple regression analysis and is widely used in various fields to make predictions about real-world phenomena, such as economics, finance, and the social sciences. Linear regression is a powerful tool for making predictions about future outcomes based on past data. However, this model assumes a linear relationship between the variables, which may not always hold in real-world situations; hence, there may be better methods for modeling complex nonlinear relationships.

Feed-Forward Neural Network

The feed-forward neural network, or dense network, is the first and most straightforward neural network, used in many applications such as regression, classification, clustering, optimization, and forecasting. In this type of neural network, the information always moves forward (in one direction only) to learn the patterns from inputs associated with desired outputs. In other words, FNNs have no loops or cycles in their network. The feed-forward neural network architecture in this study consists of eight layers of dense and dropout, stacked together. Each dense layer has 256 neurons with the rectified linear unit (ReLU) activation function, while the output is a dense layer of 24 neurons with the sigmoid activation function.

Convolutional Neural Network

The convolutional neural network is a popular neural network for analyzing images that learns patterns by applying convolutional filters with different kernel sizes and pooling layers to the inputs. The one-dimensional convolutional neural network works similarly to two- or three-dimensional CNNs to analyze 1D signals, texts, or other sequences. The convolutional neural network architecture in this study consists of six layers of one-dimensional convolution and dropout, stacked together. Each one-dimensional convolutional layer has 184 filters with the rectified linear unit (ReLU) activation function, while the last layer is linear with 24 outputs.

Long Short-Term Memory Network

The long short-term memory network is one of the most advanced neural networks for analyzing sequences, taking into account the dependence between the time steps of the input feature space, like recurrent neural networks (RNNs). The LSTM cell has various so-called gates that improve the performance of regular RNNs by avoiding the vanishing or exploding gradient issues occurring in RNNs. The long short-term memory network architecture in this study consists of four layers of LSTM and dropout, stacked together. Each LSTM layer has 120 neurons with the hyperbolic tangent activation function, while the last layer is linear with 24 outputs. The three architectures are sketched in the code example below.
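A minimal sketch of the three architectures follows, assuming a Keras/TensorFlow implementation. The layer counts, neuron/filter numbers, and activations come from the text; everything else (dropout rates, kernel size, input shapes) is an assumption, not the authors' configuration.

```python
# Sketch of the FNN, CNN, and LSTM architectures described above.
# Dropout rate (0.2) and Conv1D kernel size (3) are assumed placeholders.
from tensorflow import keras
from tensorflow.keras import layers

N_IN, N_OUT = 120, 24   # five days of hourly inputs -> one day-ahead output

def build_fnn(n_features):
    m = keras.Sequential()
    m.add(layers.Dense(256, activation="relu",
                       input_shape=(N_IN * n_features,)))
    m.add(layers.Dropout(0.2))
    for _ in range(3):                 # 8 layers total: 4 x (Dense + Dropout)
        m.add(layers.Dense(256, activation="relu"))
        m.add(layers.Dropout(0.2))
    m.add(layers.Dense(N_OUT, activation="sigmoid"))  # 24 hourly outputs
    return m

def build_cnn(n_features):
    m = keras.Sequential()
    m.add(layers.Conv1D(184, 3, padding="same", activation="relu",
                        input_shape=(N_IN, n_features)))
    m.add(layers.Dropout(0.2))
    for _ in range(2):                 # 6 layers total: 3 x (Conv1D + Dropout)
        m.add(layers.Conv1D(184, 3, padding="same", activation="relu"))
        m.add(layers.Dropout(0.2))
    m.add(layers.Flatten())
    m.add(layers.Dense(N_OUT, activation="linear"))
    return m

def build_lstm(n_features):
    m = keras.Sequential()
    m.add(layers.LSTM(120, activation="tanh", return_sequences=True,
                      input_shape=(N_IN, n_features)))
    m.add(layers.Dropout(0.2))
    m.add(layers.LSTM(120, activation="tanh"))
    m.add(layers.Dropout(0.2))
    m.add(layers.Dense(N_OUT, activation="linear"))
    return m
```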
Transfer Learning

Transfer learning is the process of transferring learned knowledge from a similar task to a new problem. In this method, a model trained with a large dataset is reused for a new task for which insufficient data are available. One of the main advantages of this method is that the pretrained model has learned a rich set of patterns from a problem with a considerable amount of data. Applying such a model to a new, similar task with considerably fewer data improves the performance of the modeling. Transfer learning also saves computational resources by reusing the pretrained model. This study first trains the model on a PV system with a rich historical dataset, then reuses the model on a newly established PV system in the same region.

The Model Framework

This study uses an hourly historical dataset of two different PV power farms in the same neighborhood. The two PV power farms, located within a proximity of 1.25 km, are analyzed. As presented in Table 2, database one (db 1) encompasses a longer data period compared to database two (db 2). In order to train a precise model, db 1 is utilized, while db 2 serves as a testing ground to evaluate the effectiveness of transfer learning in predicting day-ahead PV power production. The datasets consist of information on PV power output, ambient temperature, and humidity. This study aims to investigate the potential of transfer learning to improve the accuracy of day-ahead PV power prediction for the PV power farm with limited historical data (db 2). The study compares the prediction accuracy of the model trained on db 1 with the accuracy of the transfer-learned model. The results of this study provide valuable insights into the feasibility of using transfer learning in real-world applications for day-ahead PV power prediction, especially in cases with limited historical data. This study presents a framework, shown in Figure 1, based on deep learning and transfer learning. This framework consists of two phases. In the first phase, the rich dataset of db 1 is used to build and train the optimal model for hourly day-ahead PV power forecasting. Then, such a model is transferred to phase II for PV power prediction on db 2. The presented framework leverages the power of deep learning and transfer learning to improve the accuracy of day-ahead PV power forecasting. The deep learning model in phase I is trained using a large and diverse dataset from db 1, allowing the capture of complex relationships between various meteorological variables and PV power production. The transfer learning process in phase II fine-tunes the pretrained model from phase I, utilizing the limited data from db 2, and improves its ability to perform accurate predictions for the second PV power farm. The proposed framework provides a practical solution for PV power forecasting in real-world applications, especially in cases where limited historical data are available. As presented in Table 2, these two databases have different statistical behavior; for example, the rated power of db 1 is about 75 kW, while the rated power of db 2 is much higher (243 kW). However, as presented later, the advantage of transfer learning using neural networks prevents the trained model from working poorly on db 2. In the preprocessing step, each dataset is cleaned and normalized with the z-score formula presented in (1), considering the mean (µ) and standard deviation (σ) of the input feature space (x):

x′ = (x − µ)/σ  (1)

In phase II, the optimal model achieved in phase I is loaded to be retrained on the normalized dataset of db 2. In this training step, the earlier layers of the transferred model are frozen to avoid losing the learned patterns from db 1; therefore, their weights are not updated, and only the weights of the last layer are updated, as sketched below.
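A minimal sketch of this phase II fine-tuning step, assuming the Keras models defined earlier; the optimizer settings and epoch count are placeholders, not the authors' values.

```python
# Phase II fine-tuning as described above: freeze all layers except the
# last, then retrain on the (z-score normalized) db 2 training set.
from tensorflow import keras

def fine_tune_on_db2(pretrained, x_train_db2, y_train_db2, epochs=200):
    for layer in pretrained.layers[:-1]:
        layer.trainable = False             # keep the patterns learned on db 1
    pretrained.layers[-1].trainable = True  # only the last layer is updated
    pretrained.compile(optimizer=keras.optimizers.Adam(1e-4),
                       loss="mae")          # MAE cost function, Eq. (2)
    pretrained.fit(x_train_db2, y_train_db2, epochs=epochs, verbose=0)
    return pretrained
```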
The test dataset of db 2 is normalized by the mean and standard deviation calculated for the training dataset of db 2. The outputs of the models are then denormalized with these values so that they are on the same actual scale when evaluating the accuracy and performance of the models.

Figure 1. The presented framework of day-ahead PV power prediction using transfer learning and deep neural networks.

Results and Discussion

The linear model and three state-of-the-art deep learning models (a feedforward neural network, a convolutional neural network, and long short-term memory) have been trained based on the framework presented in Figure 1. The models are optimized considering MAE (mean absolute error) as a cost function (2), and Bayesian optimization is employed to select the best hyperparameters for the models. Bayesian optimization, a probabilistic method for optimizing hyperparameters, ensures that the models are trained with optimal settings, resulting in improved prediction accuracy:

MAE = (1/n) Σ_i |y_i − ŷ_i|  (2)

where n is the total number of samples. A sliding window is used to build the input-output pairs for the regression purpose of this study. Each input consists of information for five days (PV power, temperature, and time), and the associated output is the day-ahead PV power. In other words, a model predicts PV power production (24 samples, 1 per hour) by looking only at the historical dataset of the last five days (120 hourly input samples for each of PV power, temperature, and time). The results of these models are compared and analyzed to evaluate their performance in terms of accuracy and computational efficiency. The comparison provides a comprehensive evaluation of the proposed framework and helps to determine the most suitable model for day-ahead PV power forecasting with transfer learning. In order to evaluate the performance of the models and compare their accuracy in day-ahead PV power prediction, various evaluation metrics are taken into account, namely mean square error (MSE), mean absolute percentage error (MAPE), root mean square error (RMSE), and weighted mean absolute percentage error (wMAPE), as presented in (3)-(6), respectively:

MSE = (1/n) Σ_i (y_i − ŷ_i)²  (3)

MAPE = (100/n) Σ_i |y_i − ŷ_i| / |y_i|  (4)

RMSE = √MSE  (5)

wMAPE = 100 · Σ_i |y_i − ŷ_i| / Σ_i |y_i|  (6)

wMAPE determines the average difference between the predicted and actual values by considering the magnitude of the actual values. This metric generates a weighted average of the absolute percentage errors, with the weight determined by the size of the actual values. Hence, wMAPE is particularly apt for evaluating forecasting models in situations where the actual values display substantial fluctuations in magnitude.
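A compact NumPy rendering of the five-day sliding window and of the metrics in (2)-(6), as reconstructed above from their standard definitions; the array shapes and percentage conventions are assumptions.

```python
# (a) Sliding window building input-output pairs, and (b) the evaluation
# metrics of Eqs. (2)-(6).  series is an hourly (T, n_features) array whose
# first column is assumed to be PV power.
import numpy as np

def sliding_windows(series, n_in=120, n_out=24):
    """Return inputs (N, n_in, n_features) and day-ahead targets (N, n_out)."""
    xs, ys = [], []
    for t in range(len(series) - n_in - n_out + 1):
        xs.append(series[t:t + n_in])
        ys.append(series[t + n_in:t + n_in + n_out, 0])   # PV power column
    return np.array(xs), np.array(ys)

def mae(y, p):   return np.mean(np.abs(y - p))                           # (2)
def mse(y, p):   return np.mean((y - p) ** 2)                            # (3)
def mape(y, p):  return 100.0 * np.mean(np.abs((y - p) / y))             # (4)
def rmse(y, p):  return np.sqrt(mse(y, p))                               # (5)
def wmape(y, p): return 100.0 * np.sum(np.abs(y - p)) / np.sum(np.abs(y))  # (6)
```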
Training the Base Model

In the first modeling phase, the four models are trained using the historical dataset of db 1: 80% of the dataset is used for training and 20% for validation of the models. When limited data are available, dividing them into only training and validation sets can be a viable solution, allowing both training of the model and evaluation of its performance. As shown in Figure 2, the models have good overall performance on both the training and validation sets. The models based on CNN and LSTM have comparably better accuracy, since their internal structure has been designed to analyze sequence data, allowing them to capture important features of the time-series dataset effectively. As expected, the LSTM model's performance in capturing hidden features in a time-series dataset is superior, thanks to the various gates it has to determine which information should be forgotten, remembered, or passed to the next cell. With their promising performance, these base models will serve as the foundation for the transfer learning phase. This study presents MAE and MSE in kW units. The accuracy of the trained models with regard to the different evaluation metrics is presented in Table 3. The model based on LSTM has the best accuracy in all metrics, with an MAE of 0.052, MSE of 0.015, MAPE of 24%, RMSE of 0.101, and wMAPE of 25.05%. These results show that all models have reasonable accuracy to be used in the second phase of the modeling. Additionally, the results of the accuracy evaluation demonstrate the effectiveness of the proposed framework in improving prediction performance. The outstanding performance of the LSTM model, with a low MAE, MSE, and RMSE, highlights its potential as a robust solution for day-ahead PV power forecasting. The MAPE and wMAPE, which measure the percentage error of the predictions, further validate the results and show that the models have a high level of accuracy.

Transfer Learning

The trained models in phase I are transferred to the phase II setting in the second part of the modeling. The first three months (the beginning of September 2017 to the end of December 2017) of db 2 are considered the training set, while the data for January 2018 in this database are reserved as the test set to evaluate the performance of the models. This study also trains new linear, dense, CNN, and LSTM models on the training set of db 2 to evaluate the performance of the models transferred from phase I. The last layers of the transferred models are also retrained on the training set constructed from db 2. This study investigates the implementation of transfer learning by evaluating the performance of transferred models against newly trained models. The transferred models, retrained transferred models, and new models are all evaluated on the test set of db 2 to assess their ability to generalize to new data. By comparing the results, this study provides insights into the efficacy of transfer learning and highlights the factors that impact its performance. Therefore, the following sets of models are considered:

• New model: a set of new models trained on the training set of db 2. These models are developed specifically for the data and requirements of phase II.

• Transfer: a set of models transferred from phase I that have undergone minimal modifications. These models are not retrained, but rely on their preexisting knowledge and training to perform predictions in the new environment of phase II.
• Trained transfer: a set of models transferred from phase I and further trained with the training set of db 2. These models benefit from the knowledge acquired during phase I but also incorporate new information and adapt to the specifics of the new environment in phase II. As a result, their performance may be improved compared to the untrained transferred models. Figure 3 shows the accuracy of these three sets of models in terms of MAE on the test set of db 2. As shown, the new linear model has the worst performance due to a lack of sufficient training data. In contrast, the transferred models perform better, especially in the case of the linear model: the accuracy of the transferred linear model improved dramatically. The figure also allows a close comparison of the top models, namely, the dense, CNN, and LSTM variants. Considering only the nonlinear models (dense, CNN, LSTM), the new models based on CNN and LSTM work better than the dense model, while the untrained transferred CNN works better than the untrained transferred LSTM. Generally, retraining the models with the training set of db 2 enhanced their precision, and the transferred LSTM improved more than the transferred CNN. The retrained LSTM model has the best performance among all nine models presented. It is important to note that the choice of model depends on the particular problem and the characteristics of the data. Although LSTM and CNN models may perform better in some cases, dense models may still be appropriate for simpler tasks or smaller datasets. Transfer learning can be beneficial in reducing the amount of training data needed and accelerating the training process. Figure 4 illustrates an hourly day-ahead PV power prediction of a random date in the test set of db 2 based on the dense model. The new dense model failed to predict the day ahead accurately, because deep learning models need a lot of data to generalize with acceptable precision. Similarly, the transferred dense network, which has not been retrained, could not forecast this date well enough. However, retraining this network with data from db 2 improves the accuracy of the model in such a way that its prediction is closer to the actual labels than those of the other two models presented in this figure. Figure 5 illustrates an hourly day-ahead PV power prediction of a random date in the test set of db 2 based on the LSTM networks. Similarly to the dense networks, the new LSTM model and the untrained transferred LSTM model do not accurately predict the day ahead for the sample example. One of the reasons that untrained transferred models perform poorly is the different scales and rated powers of the two datasets.
Moreover, the statistical properties and distributions of these datasets are different. Therefore, the performance of the transferred models improved significantly after retraining, even with the small training set of db 2. Figure 6 illustrates an hourly day-ahead PV power prediction of a random date in the test set of db 2 based on transfer learning. All the models presented in this figure are retrained with the training set of db 2; thus, they have superior performance compared to the other groups of models, namely, the new models and the untrained transferred models. Above all, the trained transferred LSTM model shows the best precision, since it is designed to capture hidden patterns in sequences such as time-series datasets. Table 4 presents the accuracy of all 12 models presented in phase II of the proposed framework. The models based on transfer learning perform better in day-ahead PV power prediction than the new models trained on the limited dataset available in db 2. For instance, the new linear model performs poorly, while the transferred linear version enhanced PV prediction accuracy dramatically. The models based on the LSTM network generally perform better on most evaluation metrics. However, the trained transfer CNN model works slightly better than the untrained transfer LSTM model.
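Before comparing the remaining metrics, it is worth sketching the freeze-and-retrain step that produces the "trained transfer" models. The paper reports using TensorFlow, so the sketch below is written with Keras; the layer sizes and training settings are illustrative assumptions, not the authors' exact architecture:

```python
import tensorflow as tf

# Phase I: base model trained on db 1 (architecture is an illustrative assumption).
base = tf.keras.Sequential([
    tf.keras.Input(shape=(120, 3)),                 # 5 days x 24 h, 3 features
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(48, activation="relu"),
    tf.keras.layers.Dense(24),                      # next-day hourly PV power
])
base.compile(optimizer="adam", loss="mae")
# base.fit(x_db1, y_db1, validation_split=0.2, epochs=100)  # phase I training

# Phase II: freeze every layer except the last, then fine-tune on db 2.
for layer in base.layers[:-1]:
    layer.trainable = False
base.compile(optimizer="adam", loss="mae")          # recompile after freezing
# base.fit(x_db2_train, y_db2_train, epochs=50)     # retraining on the small db 2 set
```

Recompiling after toggling `trainable` is required so that the optimizer only updates the last layer's weights, which mirrors the phase II procedure described above.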
After training the untrained transfer models, the LSTM model improves more than the CNN model in MAE, MSE, and RMSE, reaching lower values for these metrics. On the other hand, the CNN reaches the lowest MAPE (68.25%). In this study, all the modeling was performed in the Python programming language on a workstation with an i7-8700K CPU and 16 GB RAM. Various packages and libraries were used for data processing, neural network modeling and optimization, and visualization, including NumPy, Pandas, TensorFlow, and Matplotlib. In transfer learning, a pretrained model is fine-tuned on a new task, allowing the model to leverage its prior knowledge to solve the new problem more efficiently. This can result in improved accuracy as well as reduced training time, as demonstrated in Table 5. Table 5 presents the computational time, in minutes, for training the neural networks in phases I and II. Since more data are available in phase I, the computational time is comparably higher than for training the original models in phase II. Moreover, implementing transfer learning improved not only the accuracy but also the training time; for example, the training time for the LSTM model was reduced from 201 to 76 min. Using a pretrained model, the model can quickly adapt to the new task, reducing the time required for training and leveraging the features learned from the previous task, leading to improved performance. Conclusions Deep learning models have achieved reliable and accurate extrapolation and prediction in solar energy forecasting in recent years. However, the accuracy of these models strongly depends on the size of the historical dataset, and their forecasting precision is low if not enough data are available. Thus, this study presents a data-driven framework based on transfer learning and deep neural networks to predict day-ahead PV power generation for newly installed PV power plants. In the first phase of the framework, four predictive models based on linear, dense, CNN, and LSTM networks are trained and optimized with a rich PV system dataset. Then, these reliable models are transferred to the second phase, associated with a newly installed PV power plant in the same region. New models based on the previous architectures are also trained with the dataset of the newly installed PV power plant. The results show that the transferred models retrained with the new dataset perform better than the other models. Among all 12 models presented in this study, the retrained transferred LSTM model has the best accuracy, with an MAE of 0.211, MSE of 0.168, MAPE of 74%, RMSE of 0.403, and wMAPE of 32.04%, even though the rated PV production powers of the two plants are quite different.
8,187.2
2023-02-17T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
Approximate Analytical Solutions for Strongly Coupled Systems of Singularly Perturbed Convection–Diffusion Problems: This work presents a reliable algorithm to obtain approximate analytical solutions for a strongly coupled system of singularly perturbed convection–diffusion problems, which exhibit a boundary layer at one end. The proposed method involves constructing a zero-order asymptotic approximate solution for the original system. This approximation results in the formation of two systems: a boundary layer system with a known analytical solution and a reduced terminal value system, which is solved analytically using an improved residual power series approach. This approach combines the residual power series method with Padé approximation and Laplace transformation, resulting in an approximate analytical solution with higher accuracy compared to the conventional residual power series method. In addition, error estimates are extracted, and illustrative examples are provided to demonstrate the accuracy and effectiveness of the method. Introduction Singular perturbation problems (SPPs) arise in diversified areas of applied mathematics and engineering, such as aerodynamics, fluid mechanics, elasticity, optimal control and more [1][2][3][4][5][6][7]. It is widely recognized that solutions to such problems exhibit a multiscale nature, characterized by the presence of thin layers where the solution undergoes rapid variations, while outside of these layers the solution behaves smoothly and changes slowly. Numerous analytical and numerical approaches have been developed to handle and solve SPPs, as examined in the studies conducted by O'Malley [6], Miller et al. [7], Ross et al. [8] and other referenced works. Traditional numerical methods often struggle to provide accurate approximate solutions of SPPs due to the presence of thin layer regions. To overcome this challenge, some numerical techniques treat second-order singularly perturbed boundary value problems (SPBVPs) by transforming them into appropriate initial value problems (IVPs), because the numerical treatment of the corresponding IVPs is comparatively easier than that of BVPs. Various initial value techniques for solving SPBVPs have been developed in the literature, as discussed in papers [9][10][11][12][13][14][15].
The RPSM is a powerful technique for solving IVPs without linearization, perturbation or discretization [49][50][51][52][53][54][55][56][57][58]. It stands apart from classical power series methods, which can be computationally expensive. However, the RPSM and other Taylor series approximation methods face limitations and challenges, particularly when applied to problems that span a substantial time interval or involve high solution gradients, such as those containing boundary layers [52][53][54]. To overcome this, an enhanced version of the RPSM called the improved RPSM (IRPSM) is proposed. The IRPSM utilizes Padé approximants, which are known for their superior convergence compared to series approximations [37,45,[56][57][58][59][60]. Moreover, by combining the Laplace transform method with Padé approximations [42,52,56], we can obtain more accurate solutions that closely approach exact solutions. This paper presents an efficient algorithm designed to obtain approximate analytical solutions for a complex system of strongly coupled singularly perturbed convection-diffusion problems. These problems exhibit a boundary layer phenomenon at one end, which poses significant challenges in finding accurate solutions. The proposed method involves constructing a zero-order asymptotic approximate solution for the given system, followed by the analytical solution of the reduced terminal value system (RTVS) using the IRPSM technique. To improve the accuracy and convergence properties of RPSMs, the IRPSM combines an RPSM with Padé approximation and Laplace transformation. Compared to the conventional RPSM, the proposed IRPSM offers higher accuracy and a larger convergence region. This paper also addresses error estimation and demonstrates the effectiveness of the present method through illustrative examples. Description of the Method Consider the following strongly coupled system of two singularly perturbed convection-diffusion boundary value problems [24,29,30]: εy_i''(x) + a_i1(x) y_1'(x) + a_i2(x) y_2'(x) = f_i(x), i = 1, 2, x ∈ (a, b), (1) with the following Dirichlet boundary conditions y_i(a) = α_i, y_i(b) = β_i, i = 1, 2, (2) where 0 < ε ≪ 1, the coefficients a_ij(x) and source terms f_i(x) are assumed to be sufficiently continuously differentiable functions for x ∈ (a, b), with a_ii > 0, a_ij ≤ 0, i ≠ j, and with α_i, β_i as given constants [24,29,30]. Under these conditions, the problem exhibits overlapping boundary layers at x = a with a width of O(ε) [24]. The equations in (1) are strongly coupled through their convective terms [22]. The analytical behavior of the solution to the SPBVS (1) is influenced by the nature of the boundary conditions, and it has been noted in [8] that the most challenging case arises when these conditions are of the Dirichlet type, like those described in (2). For more details about analytical results such as existence, uniqueness and asymptotic solution approximation, one may refer to the work presented in Refs. [22][23][24][25][26][27][28][29][30][31].
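To see why the layers sit at x = a and have width O(ε), it helps to look at a scalar analogue of (1); the following display is a standard textbook argument, not taken from the paper, and the constant coefficient p > 0 plays the role of a_ii(x):

```latex
\varepsilon\,y''(x) + p\,y'(x) = 0,\quad p > 0
\;\Longrightarrow\;
y(x) = C_1 + C_2\,e^{-p\,(x-a)/\varepsilon}.
```

The exponential component decays from the left endpoint over a distance of order ε/p, so it is negligible outside an O(ε)-wide neighborhood of x = a; this is exactly the layer structure that the zero-order asymptotic approximation and the RTVS exploit.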
The coupled SPBVS (1)-(2) finds numerous practical applications in modeling complex physical phenomena [1][2][3][4][5],[25]. These include the turbulent interaction of waves and currents [2], diffusion processes involving chemical reactions [3], optimal control problems in resistance-capacitor electrical circuits [1], magnetohydrodynamic duct flow problems [4,5,61,62] and more. Obtaining accurate analytical solutions for this SPBVS is crucial, as it allows researchers to carefully study how different physical parameters affect the behavior of the solutions. Having these solutions available helps improve our understanding and analysis of the system's dynamics, leading to advancements in scientific knowledge. RPSM for the RTVS (3) In this subsection, the RPSM is introduced for solving the RTVS (3). The RPSM consists of expressing the solution of the RTVS (3) as a power series expansion about the terminal point x = b [49,55]. To achieve our goal, we suppose that the solution of the RTVS (3) takes the form of such a power series expansion and can be approximated by its k-th truncated series (12). Applying the RPSM to the RTVS (3) leads to the definitions of the k-th residual functions and the ∞-th residual functions proposed in [49,51,53,55]; the basic rule of the RPSM is that the ∞-th residual function vanishes identically, so its value and its derivatives at the expansion point are zero. Improved RPSM To enhance the accuracy and expand the convergence region of the series solution obtained from the RPSM, we recommend employing the Laplace-Padé combination approach for the truncated series solution (12) of the RPSM. We assume that the solution of the RTVS (3) and its corresponding series expansion (12) satisfy the conditions of Laplace transformability and a Padé approximant [42,52,[56][57][58][59][60]. Padé Approximant at x = b Padé approximants are the best rational approximations of power series [57][58][59][60]. The truncated power series solution u_i^k(x) defined by (12) can be approximated through Padé approximation as follows [37,[56][57][58][59][60]: let the rational approximation u_i^{l,m}(x) of u_i^k(x) be the quotient of two polynomials P_i,l(x) and Q_i,m(x) of degrees l and m in (x − b), respectively, so that u_i^{l,m}(x) = P_i,l(x)/Q_i,m(x). The polynomials in (17) are constructed so that u_i(x) and u_i^{l,m}(x) agree at x = b together with their derivatives up to order l + m ≤ k. Consequently, the matching condition determines the coefficients of P_i,l(x) and Q_i,m(x) [37,56-60]: multiplying (18) by Q_i,m(x) gives (19). When the left side of (19) is multiplied out and the coefficients of the powers of (x − b)^r are set equal to zero for r = 0, 1, ..., l + m, the result is a system of 2(l + m + 1) linear equations in the 2(l + m + 1) unknown coefficients of P_i,l(x) and Q_i,m(x). By solving this linear system, we obtain the rational approximation u_i^{l,m}(x). Although the Padé approximation u_i^{l,m}(x) agrees with the truncated Taylor expansion u_i^{l+m}(x) up to order O(l + m), the Padé approximation can outperform the truncated Taylor expansion because it can accurately represent functions with poles or singularities outside the region of convergence of a Taylor expansion, resulting in a more accurate approximation and a larger convergence region [42,52,[56][57][58][59][60]. Laplace-Padé Algorithm The Laplace-Padé [l/m] algorithm can be described as follows: Step 1. Begin by replacing x − b with t in the power series (12) and then apply the Laplace transformation, resulting in a transformed series U_i(s), i = 1, 2. Step 2. Substitute s with 1/τ in the transformed series U_i(s). Step 3.
Convert the resulting series into a Padé approximant u_i^{l,m}(τ). Step 4. Substitute τ with 1/s in the Padé approximant. Step 5. Lastly, apply the inverse Laplace transform and replace t with x − b to obtain the approximate augmented solution u_i,ap(x), i = 1, 2. Finally, the approximate analytical solution of the SPBVS (1)-(2) can be expressed by (20). Error Estimate of the Method The numerical error of the present method has two sources: one from the asymptotic approximation and the other from the analytical approximation by the IRPSM. Theorem 2. The error between the solution of the SPBVS (1) and the approximate analytical solution (20) satisfies the inequality (21). Proof. Since the Padé approximant has a bounded error [57-60], the inequality follows from Theorem 1 together with the above bounded errors. It is worth highlighting that the IRPSM often provides the exact solution for the RTVS (3), which eliminates the second term in the error inequality (21). Conversely, when the asymptotic boundary layer solution (9) accurately represents the boundary layer solution of the original SPBVS (1)-(2), the first term in the error inequality (21) is eliminated. In such a case, the remaining error becomes independent of the perturbation parameter ε and is solely determined by the methods used to solve the RTVS (3). □ Numerical Results This section provides illustrative examples that demonstrate the method's accuracy and efficiency in solving the considered problems. The selected examples have been carefully chosen from the literature, allowing for a comprehensive comparison. They include both two- and three-dimensional linear examples with constant or variable coefficients. Furthermore, these examples involve non-homogeneous source terms that can be constant, exponential or trigonometric in nature. Additionally, we have considered examples with both Dirichlet and Robin boundary conditions, and we have included cases with known or unknown exact solutions. These selections aim to facilitate a thorough analysis of the proposed method and provide a comprehensive understanding of its applicability and effectiveness. Throughout this section, we will refer to the combination of the asymptotic approximation and the RPSM as A-RPSM, and the combination of the asymptotic approximation and the IRPSM as A-IRPSM. All symbolic calculations were conducted using MAPLE 14, while numerical simulations were performed using MATLAB 2017b. Example 1. Consider the SPBVS (22) of [24,32] with Dirichlet boundary conditions, where the source terms f_1(x) and f_2(x) are given; the exact solution of the SPBVS (22) is known. The RTVS of (22) is given by (23). Applying the 10th-order RPSM to the RTVS (23) yields the truncated series solution (24), and applying the Laplace-Padé [5/5] algorithm to (24) yields the approximate solution. To portray the solution behavior inside the boundary layer, Figure 1 presents the profiles of the exact solution (solid line) and the approximate solution (dotted marked line) in Example 1 over (left) the problem domain [0, 1] and (right) a boundary layer region for different values of ε. This shows that the solution exhibits a high gradient within the boundary layer, which poses a challenge for classical numerical methods to accurately capture without special treatment.
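Before turning to the errors, the five Laplace-Padé steps above can be prototyped symbolically. The sketch below uses sympy; the test function u(t) = e^{-t} + e^{-3t} is a hypothetical smooth example chosen for illustration (not one of the paper's examples), and the [2/2] Laplace-Padé post-processing recovers it exactly from a 4th-order truncated series:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

def pade_from_series(a, L, M, x):
    """Exact [L/M] Pade approximant built from series coefficients a[0..L+M]."""
    qs = sp.symbols(f'q1:{M + 1}')                    # q1..qM, with q0 = 1
    q = [sp.Integer(1), *qs]
    eqs = [sp.Eq(sum(q[j] * a[n - j] for j in range(M + 1) if n - j >= 0), 0)
           for n in range(L + 1, L + M + 1)]
    sol = sp.solve(eqs, qs, dict=True)[0]
    qc = [sp.Integer(1)] + [sol[v] for v in qs]
    pc = [sum(qc[j] * a[n - j] for j in range(min(n, M) + 1)) for n in range(L + 1)]
    num = sum(c * x**n for n, c in enumerate(pc))
    den = sum(c * x**n for n, c in enumerate(qc))
    return num / den

u_exact = sp.exp(-t) + sp.exp(-3 * t)                 # hypothetical test function
k = 4                                                 # truncation order of the series
c = [u_exact.diff(t, n).subs(t, 0) / sp.factorial(n) for n in range(k + 1)]

# Step 1: termwise Laplace transform, L{t^n} = n!/s^(n+1); Step 2: s -> 1/tau.
a = [sp.Integer(0)] + [c[n] * sp.factorial(n) for n in range(k + 1)]

# Step 3: Pade in tau; Step 4: tau -> 1/s; Step 5: inverse Laplace transform.
U = pade_from_series(a, 2, 2, tau)
F = sp.cancel(U.subs(tau, 1 / s))
u_lp = sp.inverse_laplace_transform(F, s, t)
print(sp.simplify(u_lp - u_exact))                    # 0: the exact u is recovered
```

The recovery is exact here because the transformed series is itself a [2/2] rational function of τ; in general, the Laplace-Padé result is a sum of exponentials that remains accurate far beyond the convergence region of the truncated Taylor series.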
Figure 2 illustrates the distribution of the maximum pointwise error, denoted as Error(x_ω) = ∥y(x_ω) − y_ap(x_ω)∥, x_ω ∈ [a, b], ω = 0 : K, where K represents a suitable number of grid points chosen for the purpose of comparison. The error is computed for the approximate solution (26) at various values of ε. The results depicted in Figure 2 demonstrate that as the perturbation parameter decreases, the accuracy of the approximate solution improves. Indeed, our results indicate that for ε ≤ 2^−5, the maximum error in the approximate solution remains of the order O(10^−15). Table 1 presents the maximum error, denoted as E_max = ∥Error(x_ω)∥_∞, for both the A-RPSM and A-IRPSM in solving Example 1 at different values of ε and for k = 10, while Table 2 presents the maximum error at ε = 10^−9 and for different values of k. The results from both tables confirm that as ε decreases, the accuracy of both methods increases, particularly when the first error term in (21) dominates due to larger ε values. Furthermore, the numerical results support the notion that increasing the number of series terms k improves the accuracy of both methods, especially when the first error term is negligible, due to smaller ε values, and the second error term becomes dominant, highlighting a significant difference in accuracy between the RPSM and IRPSM, especially with increasing k. The results in Tables 1 and 2 confirm that the A-IRPSM exhibits higher accuracy and demonstrates greater improvement in accuracy when compared to the A-RPSM. Table 3 provides a comparison of the maximum error results for the A-RPSM, A-IRPSM and two other methods, namely a parameter-uniform finite difference method [32] and a spectral collocation method [24]. The comparison is conducted for the numerical results obtained in [24,32] at ε = 10^−8 and various numbers of grid points N, and the results of the A-RPSM and A-IRPSM at k = 10. The results in Table 3 confirm that the A-IRPSM achieves significantly higher accuracy compared to the results of the A-RPSM and those presented in [24,32], even for the large number of grid points employed in [24,32] for accuracy improvement. This demonstrates the efficiency of the A-IRPSM in achieving accurate results with reduced computational effort. Example 2. Consider the SPBVS (27) with its boundary conditions and known exact solution. The RTVS of (27) is given by (28). Applying the 10th-order RPSM to the RTVS (28) yields the truncated series solution, and applying the Laplace-Padé [5/5] algorithm yields the approximate solution (31). Figure 3 presents the profiles of the exact solution (solid line) and the approximate solution (dotted marked line) in Example 2 over (left) the problem domain [0, 1] and (right) a boundary layer region for different values of ε. Figure 4 illustrates the distribution of the maximum pointwise error for the approximate solution (31) at various values of ε. The results in Figure 4 and Table 4 show that as the perturbation parameter decreases, the A-RPSM, the A-IRPSM and the initial value method [13] exhibit an increase in accuracy. Furthermore, the accuracy of the A-IRPSM shows a greater improvement compared to the A-RPSM and the method in [13]. Indeed, similar results were obtained when comparing our results with the results presented in [13] for the remaining examples in that study. The results in Table 5 validate that increasing the value of k leads to improved accuracy for both methods, with the A-IRPSM outperforming the A-RPSM in terms of higher accuracy and demonstrating a greater improvement in accuracy. The present method can be extended to problems with specific Robin boundary conditions involving y_i(a) and y_i'(a), where ∝_i and ϑ_i are constants. To illustrate this, let us consider the following example. Example 3. Consider the SPBVS (32) with Robin boundary conditions, where the source terms f_1(x) and f_2(x) are given; the exact solution of the SPBVS (32) is known. The RTVS of (32) is given by (34). Applying the 10th-order RPSM to the RTVS (34) yields the truncated series solution (35), and applying the Laplace-Padé [5/5] algorithm to (35) yields an approximate analytical solution (36) to the SPBVS (32). Figure 5 presents the profiles of the exact solution (solid line) and the approximate solution (dotted marked line) in Example 3 over (left) the problem domain [0, 1] and (right) a boundary layer region for different values of ε.
Figure 6 illustrates the maximum pointwise error of the solution (36) across different values of ε. Moreover, Table 6 presents the maximum error in Example 3 with the A-RPSM and A-IRPSM for different values of ε and at k = 10. As mentioned in Section 3.3, and from (33) and (36), we note that the asymptotic approximation yields the exact solution of the boundary layer of problem (32). Consequently, the remaining error is unaffected by the perturbation parameter and is solely determined by the method employed to solve the RTVS. Notably, the maximum pointwise error in the approximate solution (36) remains of the order O(10^−15) even at ε = 1. Therefore, the obtained approximate solution serves as an exceptional representation of the exact solution. The maximum error of the A-RPSM and A-IRPSM in solving Example 3 for different values of k and at ε = 10^−9 is presented in Table 7. The results in Table 7 corroborate the results from Tables 2 and 5, confirming that increasing the value of k leads to enhanced accuracy for both methods. Additionally, the results validate that the A-IRPSM demonstrates a greater improvement in accuracy compared to the A-RPSM. Example 4. Consider the SPBVS (37) with its boundary conditions. The exact solution of the SPBVS (37) is not available. The RTVS of (37) is given by (38). Applying the 10th-order RPSM to the RTVS (38) yields the truncated series solution (39), and applying the Laplace-Padé [5/5] algorithm to (39) yields the approximate solution (41). Due to the unavailability of the exact solution to the SPBVS (37), we adopted the numerical solution obtained using the bvp4c built-in function in MATLAB [63], with AbsTol and RelTol values set to 10^−10, as our reference solution for this test problem. To handle the challenges posed by steep gradients in the SPBVS, we augmented the bvp4c function with a continuation technique that allows for the solution of a BVP via a continuous transformation from an easier problem to the desired problem [64,65].
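The same continuation idea can be reproduced with SciPy's solve_bvp in place of bvp4c. The model problem below is an illustrative scalar surrogate (εy'' + y' = 1 with a layer at x = 0), not the paper's coupled system; the continuation loop solves a sequence of easier problems, warm-starting each solve from the previous mesh and solution:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative scalar surrogate:  eps*y'' + y' = 1,  y(0) = y(1) = 0,
# with a boundary layer of width O(eps) at the left endpoint x = 0.
def solve_with_continuation(eps_target, eps_start=1e-1, factor=0.1):
    x = np.linspace(0.0, 1.0, 2001)
    y = np.zeros((2, x.size))                       # crude initial guess
    eps = eps_start
    while True:
        def rhs(x, y, eps=eps):                     # y[0] = y, y[1] = y'
            return np.vstack([y[1], (1.0 - y[1]) / eps])
        def bc(ya, yb):
            return np.array([ya[0], yb[0]])
        sol = solve_bvp(rhs, bc, x, y, tol=1e-6, max_nodes=100_000)
        assert sol.success, f"solve_bvp failed at eps = {eps:g}"
        x, y = sol.x, sol.y                         # warm start for the next eps
        if eps <= eps_target:
            return sol
        eps = max(eps * factor, eps_target)

sol = solve_with_continuation(1e-4)
# Away from the layer the solution follows the reduced problem y' = 1, y(1) = 0:
print(sol.sol(0.5)[0])                              # approximately -0.5
```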
Figure 7 presents the profiles of the reference solution (solid line) and the approximate solution (dotted marked line) in Example 4 over (left) the problem domain [0, 1] and (right) a boundary layer region for different values of ε. Figure 8 illustrates the distribution of the maximum pointwise error for the approximate solution (41) at various values of ε. The results in Table 8 corroborate the results from Tables 2, 5 and 7, confirming that increasing the value of k leads to enhanced accuracy for both methods. Additionally, the results validate that the A-IRPSM demonstrates a greater improvement in accuracy compared to the A-RPSM. This method can be extended to higher dimensions of the SPBVS, as demonstrated by the following three-dimensional example. Example 5. Consider the three-dimensional system of the SPBVS (42) of [22,26,27] with its boundary conditions; the exact solution of the SPBVS (42) is known. For this example, as the solution of the RTVS (44) is a polynomial, both the A-RPSM and A-IRPSM methods yield the exact same polynomial solution. Consequently, these methods produce the same approximate solution (49) for the given problem (42). Figure 9 shows the solution profile of Example 5 over (left) the problem domain [0, 1] and (right) a boundary layer region for different values of ε.
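The tabulated maximum errors throughout this section follow the same recipe: evaluate the exact (or reference) and approximate solutions on a grid and take the largest absolute deviation. A minimal sketch, in which the grid size and the pair of test functions are placeholders:

```python
import numpy as np

def max_pointwise_error(y_exact, y_approx, a=0.0, b=1.0, K=10_000):
    """E_max = max over the grid x_w, w = 0..K, of |y(x_w) - y_ap(x_w)|."""
    x = np.linspace(a, b, K + 1)
    return np.max(np.abs(y_exact(x) - y_approx(x)))

# Placeholder pair: a layer-type profile vs. its outer (reduced) solution.
eps = 1e-2
E = max_pointwise_error(lambda x: x - 1 + np.exp(-x / eps),
                        lambda x: x - 1)
print(E)   # ~1: the reduced solution misses the O(1) boundary layer at x = 0
```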
Conclusions In this paper, an efficient method for solving strongly coupled singularly perturbed convection-diffusion systems is presented. This method utilizes the reduced terminal value system and the boundary layer system, which has a known exact solution, to derive an approximate analytical solution for the original system. These systems have practical applications, and an approximate analytical solution is needed to gain insights into their behavior and analyze practical scenarios considering different physical parameters. The proposed method combines the RPSM, Padé approximation and Laplace transformation, resulting in a more accurate solution compared to the traditional RPSM. The accuracy of the method is validated through error estimates, illustrative examples and comparisons with the existing literature. The numerical results demonstrate that decreasing the perturbation parameter or increasing the number of considered series terms improves the accuracy of this method, in agreement with the theoretical results presented in this paper. Furthermore, the A-IRPSM exhibits higher accuracy and greater improvement compared to the A-RPSM and other methods discussed in the literature. This method also demonstrates its reliability by yielding exact solutions for specific solved examples, highlighting its accuracy and trustworthiness. Additionally, the capability of extending this method to higher-dimensional singularly perturbed convection-diffusion systems is demonstrated through a three-dimensional test problem. The results clearly indicate the high accuracy of the method and its ability to provide continuous approximate or exact solutions for such systems. Future work will focus on extending this method to nonlinear problems and other types of singularly perturbed systems.
Figure 1. Exact solution (solid line) and approximate solution (dotted marked line) profiles of Example 1 at different values of ε: (left) global region, (right) boundary region.
Figure 2. Maximum pointwise error for Example 1 with A-IRPSM at different values of ε.
Figure 3. Exact solution (solid line) and approximate solution (dotted marked line) profiles of Example 2 at different values of ε: (left) global region, (right) boundary region.
Figure 4. Maximum pointwise error for Example 2 with A-IRPSM at different values of ε.
Figure 5. Exact solution (solid line) and approximate solution (dotted marked line) profiles of Example 3 at different values of ε: (left) global region, (right) boundary region.
Figure 6. Maximum pointwise error for Example 3 with A-IRPSM at different values of ε.
Figure 7. Exact solution (solid line) and approximate solution (dotted marked line) profiles of Example 4 at different values of ε: (left) global region, (right) boundary region.
Figure 8. Maximum pointwise error for Example 4 with A-IRPSM at different values of ε.
Figure 9. Exact solution (solid line) and approximate solution (dotted marked line) profiles of Example 5 at different values of ε: (left) global region, (right) boundary region.
Table 1. Numerical results with A-RPSM and A-IRPSM for Example 1 at k = 10.
Table 2. Numerical results with A-RPSM and A-IRPSM for Example 1 at ε = 10^−9.
Table 8. Maximum error E_max with A-RPSM and A-IRPSM for Example 4.
6,951
2024-01-15T00:00:00.000
[ "Mathematics", "Engineering" ]
Biothermodynamic Assay of Coptis-Evodia Herb Couples Objective. To illustrate the difference in the cold/hot natural properties and therapeutic effects of coptis-evodia herb couples by using cold/hot plate differentiating technology and microcalorimetry combined with material basis analysis in vivo and in vitro. The results showed that the animal retention ratio on the hot pad significantly decreased along with the decrease in the coptis proportion in the coptis-evodia herb couples. In addition, Zuojin wan markedly reduced the retention ratio of gastritis mice on the hot pad, while Fanzuojin wan displayed the opposite result. Further, Mg2+-ATPase, Ca2+-ATPase, and T-AOC activity significantly weakened in the livers of the mice in the coptis-treated group. In the gastric cells from the gastritis mice, Fanzuojin wan remarkably increased the calorific value for growth and metabolism, while Zuojin wan significantly reduced the calorigenic effect. The results suggested that the changes in the major chemical compositions (especially alkaloids) were the material basis that induced the transformation between "cold" and "hot" syndromes. The material basis affecting the transformation between "cold" and "hot" syndromes might be X2, X3, X4, X8, epiberberine hydrochloride, jatrorrhizine hydrochloride, coptisine sulphate, palmatine hydrochloride, and berberine hydrochloride. The CHPD combined with microcalorimetry technology is a good method to determine the differences in the "cold" and "hot" natural properties of coptis-evodia herb couples. Introduction The "cold" (Han) or "hot" (Re) property of a traditional Chinese medicine is determined by its therapeutic effect on "cold" or "hot" syndrome, which involves physiological, biochemical, metabolic, and pathological changes [1,2]. A study has shown that "cold" medicines significantly suppress the thyroid, adrenal, ovarian, and other endocrine systems, while "hot" drugs enhance the functions of these endocrine systems in animal experiments [3]. A kind of "cold" drug containing anemarrhena and gypsum was successfully used to establish a "cold" rat model, and a "hot" drug containing aconite, ginger, Codonopsis, and Astragalus cured the "cold" symptom. Some studies suggested that luteinizing hormone, thyroid-stimulating hormone, and adrenocorticotropic hormone levels significantly elevated in the "cold" animal model, while the "hot" drugs attenuated the increased hormone levels [4][5][6]. Clinical studies have shown low basal metabolism in patients with "cold" syndrome and high metabolism in the "hot" patients. It was proposed that "cold" or "hot" syndrome might be a typical reaction of the body and that the "cold"/"hot" drugs could change the current state [7]. Since mitochondria are the major organelles providing cells with energy, succinic dehydrogenase (SDH) activity increases in the "hot" status, suggesting a positive correlation between the "hot" symptom and body energy metabolism. As expected, after treatment with the "cold" drugs, the SDH activity was significantly attenuated, facilitating the recovery of mitochondrial respiration in the liver. It has been well documented that SDH, adenosine triphosphatase (ATPase), and adenosine kinase (ADK) activity is significantly enhanced in "hot" rats compared with "cold" ones. Coptis-evodia herb couples, including Zuojin wan, Ganlu san, Zhuyu wan, and Fanzuojin wan, are composed of the two herbs in different proportions. It is well known that coptis-evodia herb couples are mainly used to treat gastrointestinal diseases.
Gastric acid secretion inhibition, effects on gastrointestinal motility, and analgesic and anti-inflammatory properties have been widely reported for Zuojin wan and its similar formulae. Further, c-fos and corticotropin releasing hormone (CRH) mRNAs are markedly downregulated after Zuojin wan treatment, and Zuojin wan could elevate the gastric pH value and ulcer index (UI) [8,9]. It was confirmed that Zuojin wan or Ganlu san efficiently eliminated the "hot" symptom or aggravated the "cold" symptom. Furthermore, Zuojin wan and Ganlu san significantly attenuated Na+-K+-ATPase and Ca2+-Mg2+-ATPase activity in rat erythrocyte membranes and reduced serum interleukin-6 (IL-6) and thyroid-stimulating hormone (TSH) levels. In the present study, we used "cold"/"hot" plate differentiating (CHPD) technology combined with biothermodynamic analysis to determine the transformation of "hot" and "cold" syndromes in vivo and in vitro in the absence or presence of coptis-evodia herb couples. Further, the components of the activity materials were also analyzed by ultra performance liquid chromatography (UPLC) fingerprints [10][11][12]. Based on this, we expect to build an efficient method to determine the "hot" and "cold" properties of traditional Chinese medicine coptis-evodia herb couples. The herbs were washed, dried, and crushed to powder, and the Rhizoma Coptidis and Fructus Evodiae powders were weighed, respectively. Different proportions (Rhizoma Coptidis : Fructus Evodiae) of Zuojin wan and its similar formulae were then prepared according to the following criteria: 10 : 1 and 6 : 1 for Zuojin wan, 4 : 1 and 2 : 1 for Ganlu san, 1 : 1 for Zhuyu wan, and 1 : 2, 1 : 4, and 1 : 6 for Fanzuojin wan. The weight of each sample was 210 g. The samples were soaked in 10 volumes of deionized water at 40 ∘C for 30 min and extracted three times (2 h with 10 volumes of water, 1 h with 8 volumes of water, and 0.5 h with 6 volumes of water). After that, the extracts were combined and concentrated under reduced pressure at 75 ∘C. Finally, the extracts were dried under vacuum at 50 ∘C until constant weight. Experimental Animals. Specific pathogen-free (SPF) KM mice weighing 18-22 g were provided by the Experimental Animal Center of the Academy of Military Medical Sciences (Beijing, China). The animal room was maintained at 22 ± 2 ∘C and 30%-60% relative humidity. The mice were given free access to food and water. All the experiments were conducted in accordance with the national guidelines for the care and use of laboratory animals. This study was approved by the Ethics Committee of the Affiliated Hospital of Kunming University of Science and Technology (Kunming, China). Temperature Tropism Assay in Normal Mice. The normal mice were divided into a vehicle group, a coptis group (5.0 g/kg), a Zuojin wan group (5.0 g/kg), a Ganlu san group (5.0 g/kg), a Zhuyu wan group (5.0 g/kg), a Fanzuojin wan group (5.0 g/kg), and an evodia group (5.0 g/kg) (6 animals in each group). The animals were administrated for 7 days, once a day. 30 min after the administration, the animals were placed in a temperature tropism intelligent monitoring instrument (patent number ZL2008200004444.2) (Figure 1) [13][14][15][16]. At a room temperature of 20 ± 2 ∘C, the cold or hot pad was set at 25 ∘C or 40 ∘C, respectively. When the set temperature was reached, 6 mice (labeled 1 → 6) were placed within different channels of the instrument.
The mice were tracked by the software camera (15 frames per second). The experiment was repeated for 7 days, once a day. The retention ratio is defined as (residence time on the hot pad / total monitoring time) × 100%. To establish the gastric "cold" symptom, the animals were given cold water (4 ∘C) for three days, once a day. After fasting for 24 h, the mice were administrated with 4 ∘C NaOH (0.3 mol/L, 10 mL/kg). Then they received the various treatments for 7 days, once a day. To reproduce the gastric "hot" symptom, the animals were administrated with 10% ethanol pepper solution (20 mL/kg) for 3 days, once a day. The 10% pepper-ethanol solution was prepared as follows: pepper oil was purchased from Tashuifang Co., Ltd., and 3 mL of pepper oil was dissolved in 10% ethanol solution to reach a total volume of 30 mL. After that, the animals received the various treatments for 7 days, once a day. General Status of Animals. During the experiment, the body weight, food intake, water intake, and oxygen consumption of the mice were recorded for 7 successive days. Biochemical Assay of Liver Tissue. 10% liver homogenates were prepared, and biochemical assays of Na+-K+-ATPase, Mg2+-ATPase, Ca2+-ATPase, T-AOC, and SOD were performed in accordance with the manufacturers' instructions using an ultraviolet spectrophotometer. The mouse models with gastric "cold" or "hot" symptom were established as described previously [17]. Briefly, the mice were sacrificed by cervical dislocation. Then the peritoneal cavity was opened, and the stomach was removed and placed in cold normal saline. The stomach was cut open along the greater curvature, and food debris and blood were washed from the stomach. After that, the cells passing through a cell sieve were cultured in prepared Dulbecco's modified eagle's medium (DMEM). Microcalorimetry Assay of Mouse Gastric Cells. Under aseptic conditions, DMEM containing gastric cells (appropriate gastric cells from 2 mice weighing 18-22 g) was added to each ampule, and then Zuojin wan and its similar formulae were added, respectively. The ampule was sealed and placed in a microcalorimetry instrument at a constant temperature of 37 ∘C. The thermogram was recorded until the curve returned to baseline. The maximum power output (P_max) is calculated in accordance with the exponential growth model P_t = P_0 exp[k(t − t_0)], where k represents the cell growth rate constant during the exponential growth phase and t_0 represents the initiation time. The UPLC analysis followed [12]. An ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm, 1.7 μm) (Waters, Milford, USA) was used for this part of the experiment. The column temperature was set at 22 ± 0.5 ∘C. The mobile phase was 0.05% phosphoric acid in water (v/v)-acetonitrile. The detection wavelength was 270 nm. The injection volume was 1 μL. The number of theoretical plates was more than 3000, calculated with respect to berberine hydrochloride. Zuojin Wan Increases the Retention Ratio of Normal Mice on the Hot Pad. It is shown that the retention ratio on the hot pad decreased along with the decrease in the coptis proportion in the coptis-evodia herb couples. The order of the retention ratio on the hot pad was coptis > Zuojin wan > Ganlu san ≈ Zhuyu wan > Fanzuojin wan > evodia. The results demonstrated that Zuojin wan had a significant thermotaxis effect, while Fanzuojin wan and evodia markedly increased the tendency toward cold in normal mice (Figure 2). General Status of Normal Mice after Being Treated with Coptis-Evodia Herb Couples.
The body weight, food intake, and water intake significantly increased in the normal mice treated with Zhuyu wan and Fanzuojin wan, while the oxygen consumption markedly decreased in the Zuojin wan-treated mice (Figure 3). Furthermore, the difference grew along with the prolongation of the treatment time. In addition, Ganlu san and evodia had no similar effects. Changes in Related Biochemical Parameters in the Liver of Normal Mice Treated with Coptis-Evodia Herb Couples. The Mg2+-ATPase, Ca2+-ATPase, and T-AOC activity was significantly weakened in the coptis-treated group, while being enhanced in the evodia-, Fanzuojin wan-, and Zhuyu wan-treated groups. Compared with the control, Fanzuojin wan significantly attenuated the SOD activity in the liver, while Zhuyu wan remarkably enhanced the SOD activity (Table 1). The Establishment of Mouse Models with Gastric "Cold" or "Hot" Symptom. The normal mouse stomach was smooth and pink. However, red and white gastric mucosa, red or purple surface damage, and clear vascular permeability were observed in the mice with gastric "cold" symptom. Meanwhile, significant congestion and ulcers were seen in the mice with the "hot" symptom (Figure 4). Zuojin Wan Increases the Retention Ratio of Mice with Gastric "Hot" Symptom, While Fanzuojin Wan Increases the Retention Ratio of Mice with Gastric "Cold" Symptom on the Hot Pad. The results revealed that the retention ratio on the hot pad was significantly higher in the mice with gastric "cold" symptom than in the normal controls. In the mice with gastric "cold" symptom, Ganlu san, Zhuyu wan, and Fanzuojin wan significantly reduced the retention ratio on the hot pad at day 4 (Figure 5(a)). In the gastric "hot" model, Zuojin wan markedly increased the animal retention ratio on the hot pad compared with the model group, and the effect was gradually enhanced along with the prolongation of the time (Figure 5(b)). General Status of Gastric "Cold" or "Hot" Mice after Being Treated with Coptis-Evodia Herb Couples. The water intake significantly increased in the mice with gastric "hot" symptom and remarkably decreased in the mice with gastric "cold" symptom. The body weight increased slowly in both animal models (Figure 6). Zuojin wan significantly increased food intake and body weight and decreased water intake in the mice with gastric "hot" symptom, while Fanzuojin wan markedly increased food intake, body weight, and water intake in the mice with gastric "cold" symptom (Figure 6). The oxygen consumption was markedly reduced in Zuojin wan-treated gastric "cold" mice, while being increased in Fanzuojin wan-treated animals (Figure 6). Changes in Related Biochemical Parameters in the Liver of Gastric "Cold" or "Hot" Mice Treated with Coptis-Evodia Herb Couples. The Na+-K+-ATPase, Mg2+-ATPase, Ca2+-ATPase, T-AOC, and SOD activity was significantly weakened in the gastric "cold" mice but enhanced in the gastric "hot" mice (Table 2). Table 1: Related biochemical parameters in the liver of the normal mice after coptis-evodia herb couples treatment (mean ± SD, n = 6). Fanzuojin wan and Ganlu san significantly reduced the T-AOC and SOD activity of the mice with gastric "hot" symptom, and Zhuyu wan and Fanzuojin wan markedly enhanced the T-AOC and SOD activity of the gastric "cold" mice (Table 2). The thermogenic curves of normal mouse gastric cells treated with coptis-evodia herb couples are shown in Figure 7, and the maximum power output (P_max) data are presented in Table 3. It was found that P_max gradually increased with the increase in the evodia proportion. The P_max in Fanzuojin wan (FZJ)-treated normal mouse gastric cells was higher than that in Zuojin wan (ZJW)-treated cells.
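The growth rate constant k in the exponential-phase model P_t = P_0 exp[k(t − t_0)] can be extracted from a measured thermogram by a simple fit. The sketch below is illustrative only: the data points are hypothetical, and the onset time t_0 is absorbed into the prefactor A:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical thermogram samples from the exponential growth phase
# (time in minutes, heat output in microwatts).
t = np.array([120.0, 150.0, 180.0, 210.0, 240.0, 270.0])
P = np.array([12.2, 19.1, 29.6, 46.5, 73.8, 114.3])

def model(t, A, k):
    # P(t) = A * exp(k*t), with A = P0 * exp(-k*t0) absorbing the onset time t0
    return A * np.exp(k * t)

(A, k), _ = curve_fit(model, t, P, p0=(1.0, 0.015))
print(f"growth rate constant k = {k:.4f} per minute")
```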
Thermogenic Assay of "Cold" or "Hot" Symptom Mouse Gastric Cells Treated with Coptis-Evodia Herb Couples. The thermogenic curves are shown in Figure 8, and the maximum power output (P_max) data are presented in Table 4. They reveal that Fanzuojin wan increased P_max in "cold" symptom mouse gastric cells, whereas Zuojin wan decreased P_max in "hot" symptom mouse gastric cells. UPLC Assay of the Material Basis in Coptis-Evodia Herb Couples. According to the information provided by the UPLC fingerprints of the coptis-evodia herb couples, we compared the fingerprints and confirmed 23 specific fingerprint peaks. Among them, the area of 14 peaks accounted for about 90% of the total area; thus, these 14 peaks were confirmed to be specific peaks (Figure 9). Because the area of the berberine hydrochloride peak (peak 20#) was the largest share of the total area (10% or more), with the highest peak height and relative stability, it was selected as the reference peak. Table 2: Changes in related biochemical parameters in the liver of the mice with gastric "cold" or "hot" symptom after being treated with coptis-evodia herb couples (mean ± SD, n = 6). Under the above chromatographic conditions, 1 μL of the coptis-evodia herb couple solutions was injected to record the chromatograms (Figure 10). After that, we used the external standard method to calculate the alkaloid dissolution rate per unit area of a single herb in coptis, evodia, and Zuojin wan and its similar formulae (Table 5). Among the four similar formulae, the resemblance of Zuojin wan, Ganlu san, Zhuyu wan, and Fanzuojin wan to the control fingerprint gradually decreased, suggesting differences in chemical composition among the four formulae. Discussion To the best of our knowledge, this is the first time that the "cold" and "hot" properties of coptis-evodia herb couples have been investigated by a biothermodynamic method. Biothermodynamics is a science focusing on energy transfer and thermal variations in the metabolic processes of life systems [18,19]. Its principal idea is to display the energy metabolism process by means of thermodynamic functions [19]. Energy metabolism in a living body system will change in the presence of traditional Chinese medicine (TCM). Thus, it is necessary to obtain the changes in thermodynamic parameters to reflect the differences in the biological activity of the various reagents measured. Microcalorimetry (MCM) is an important method for biothermodynamics. Currently, it is becoming a key approach to studying body metabolism characteristics and rules, preliminary activity screening of drugs, drug interactions, and taxonomic identification [20][21][22][23]. We have previously applied MCM to the cold/hot properties of TCM [24]. In this study, we selected coptis-evodia herb couples for the biothermodynamic assay. First, coptis-evodia herb couples consist of coptis and evodia, and the therapeutic effect is completely different if the proportion changes. Secondly, the active ingredients (mainly alkaloids) are identified. Thirdly, the two herbs, coptis and evodia, have distinct "cold" and "hot" properties. Thus, this formula is typical and representative to some degree. In the temperature tropism experiment, we found that the animal retention ratio on the hot pad decreased along with the decrease in the coptis proportion in the coptis-evodia herb couples.
In the mice with gastric "cold" symptom, Ganlu san, Zhuyu wan, and Fanzuojin wan significantly reduced the retention ratio on the hot pad, while in the gastric "hot" model Zuojin wan markedly increased the animal retention ratio on the hot pad compared with the model group. Meanwhile, the related biochemical parameters changed in the normal, "cold" symptom, and "hot" symptom mice. The above results reflected differences in the "cold" and "hot" properties of Zuojin wan and its similar formulae. The in vitro study revealed thermogenic changes induced by coptis-evodia herb couples. It showed that coptis-evodia herb couples had different effects on the growth and metabolism of the gastric cells. Zuojin wan reduced the heat production in the gastric cells from the mice with gastric "hot" symptom, while Fanzuojin wan increased the heat production in the gastric "cold" symptom cells. From the principal component analysis, we could conclude that the difference between the formulae was determined by thermodynamic parameters such as Pmax. Principal component analysis of the UPLC data showed that the composition differences in the coptis-evodia herb couples involved X2, X3, X4, X8, epiberberine hydrochloride, jatrorrhizine hydrochloride, coptisine sulphate, palmatine hydrochloride, and berberine hydrochloride. Further, changes in their dissolution rates might be involved. These might be the material basis for the transfer of "cold" and "hot" properties in coptis-evodia herb couples. In summary, the CHPD combined with microcalorimetry technology is a good method to determine the differences in the "cold" and "hot" natural properties of coptis-evodia herb couples. Further, UPLC is efficient to confirm the material basis.
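The principal component analysis referred to above amounts to projecting the formula-by-peak area matrix onto its leading components and inspecting the loadings. A minimal sketch with scikit-learn, using a small invented matrix of normalized peak areas (rows = formulae, columns = fingerprint peaks; all numbers are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented normalized peak areas: 4 formulae x 5 fingerprint peaks
X = np.array([[0.9, 0.1, 0.3, 0.2, 0.8],   # Zuojin wan
              [0.7, 0.2, 0.4, 0.3, 0.6],   # Ganlu san
              [0.4, 0.5, 0.5, 0.5, 0.4],   # Zhuyu wan
              [0.2, 0.8, 0.6, 0.7, 0.1]])  # Fanzuojin wan

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance:", pca.explained_variance_ratio_)
print("loadings (peaks driving PC1):", pca.components_[0])
```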
4,179
2015-08-26T00:00:00.000
[ "Biology" ]
Maris polarization in neutron-rich nuclei We present a theoretical study of the Maris polarization effect and its application in quasi-free reactions to assess information on the structure of exotic nuclei. We discuss the uncertainties in the calculations of triple differential cross sections and of analyzing powers due to the choices of various nucleon-nucleon interactions, the optical potentials, and the limitations of the method. Our calculations explore a large number of choices for the nucleon-nucleon (NN) interactions and the optical potential for nucleon-nucleus scattering. Our study implies that polarization variables in (p,2p) reactions in inverse kinematics can be an effective probe of the single-particle structure of nuclei in radioactive-beam facilities. Elastic differential cross sections of polarized protons incident on nuclear targets display an interference pattern due to the scattering by the near and the far side of the nucleus. A crucial part of this interference pattern is due to the sign change of the angular momentum in the S·L spin-orbit part of the optical potential (see, e.g., Ref. [1]). Other types of direct collisions using polarized protons are also influenced by the sign of the spin-orbit part of the optical potential. With the availability of high-energy radioactive beams, quasifree (p,2p) and (p,pn) reactions in inverse kinematics have again become an experimental tool of choice to study nuclear spectroscopy. Newly developed detectors have allowed efficient experiments using inverse kinematics with hydrogen targets and opened new possibilities to investigate the single-particle structure, nucleon-nucleon correlations in nuclear matter, and other important nuclear properties as the neutron-to-proton ratio of secondary beam projectiles increases. These new developments are possible due to the detection of all outgoing particles, providing kinematically complete measurements of the reactions being carried out at the GSI/Germany, RIKEN/Japan, and other nuclear-physics facilities worldwide [2][3][4][5]. So far, the experiments have focused on the reliability of quasifree scattering in inverse kinematics as a technique to study the shell evolution in neutron-rich nuclei, but detailed studies such as the quenching of spectroscopic factors and single-particle properties of neutron-rich nuclei have also been reported recently [6]. Concomitantly, theoretical interest in (p,2p) reactions is again on the rise [3,[7][8][9]. In this Letter, we systematically explore the details of the "Maris effect" [10][11][12][13] as a function of the neutron-to-proton asymmetry. We show that the effective polarization of knocked-out protons increases steadily with the neutron number. The Maris effect on the spin orientation of the ejected nucleon is caused by the action of the spin-orbit and absorption parts of the optical potential combined with the distinct occupations of the single-particle j> = l + 1/2 and j< = l − 1/2 orbitals [14]. In what follows we refer to the spin variables of the incident proton, although the same argumentation applies to the knocked-out nucleon. In fact, the Maris polarization effect was proposed as a measure of the polarization of the ejected nucleon. Suppose that the primary spin-up polarized proton is detected at an angle θ, as depicted in Figure 1. Protons hitting initially polarized spin-up nucleons in a j-orbital with their incoming momenta directed toward the near side correspond to L·S < 0, and to L·S > 0 if the protons are directed to the far side. 
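The sign flip of L·S between near- and far-side trajectories can be checked with elementary vector algebra. Below is a minimal sketch, assuming the beam travels along +z, the near side sits at +x, and spin-up means polarization along +y; these conventions are illustrative choices, not taken from the paper:

```python
import numpy as np

p_beam = np.array([0.0, 0.0, 1.0])   # incident momentum along +z (arbitrary units)
spin_up = np.array([0.0, 1.0, 0.0])  # polarization along +y, normal to the reaction plane

for side, b in [("near", +1.0), ("far", -1.0)]:
    r = np.array([b, 0.0, 0.0])      # impact point on the near (+x) or far (-x) side
    L = np.cross(r, p_beam)          # orbital angular momentum L = r x p
    print(side, "side: sign of L.S =", np.sign(L @ spin_up))
# -> near side: L.S < 0, far side: L.S > 0, as stated in the text
```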
Because of their shorter path within the nucleus, protons involved in collisions happening at the near side will undergo less attenuation than those involved in collisions at the far side. Therefore their initial polarization is modified less than if they were knocked out from the far side. The dependence of the optical potential on the L·S spin-orbit term, combined with absorption, will thus affect the polarization differently for near- and far-side scattering (part (a) of Figure 1). The polarization of the incoming proton does not change when the collisions are summed over all nucleons removed from a closed subshell, if the momentum distributions of nucleons within the subshell are identical and if the nucleon-nucleon (NN) interaction is spin-independent (part (b) of Figure 1). However, the NN-interaction has a known spin-dependence for the (spin-up)-(spin-up) and (spin-down)-(spin-up) cross sections for triplet and singlet scattering. Hence, one should expect a change in the proton polarization due to the subshell occupancy, and its effect will be larger if more nucleons occupy that subshell, i.e., twice as large for p3/2 as for p1/2 subshells. The combination of absorption, the spin-orbit part of the optical potential, and the spin-dependence of the NN-interaction leads to the Maris polarization effect, most evident in the observation of the analyzing power of the scattered protons, A_y = (dσ↑ − dσ↓)/(dσ↑ + dσ↓), (1) where dσ↑ (dσ↓) denotes the cross section for spin-up (spin-down) incident protons. Observing A_y requires the detection of the knocked-out nucleon in coincidence with incoming polarized protons of opposite polarizations. It is also expected that the Maris effect is of opposite sign for the 1p1/2 compared to the 1p3/2 orbital. For more details on the Maris polarization effect and its applications to nuclear spectroscopy, see, e.g., Refs. [10][11][12][13][14]. The Maris polarization effect is a well-established experimental tool, e.g., in (p,2p) reaction studies of nuclear medium effects on the NN-interaction [10][11][12][13][14][15][16][17][18][19][20][21]. It has also been employed to investigate medium modifications of the nucleon and meson masses and the meson-nucleon coupling constants in the nuclear medium, motivated by strong relativistic nuclear fields, deconfinement of quarks, and also partial chiral symmetry restoration [22][23][24][25][26][27][28][29]. It is worth noticing that there are various distinct spin-orbit interactions involved in the Maris effect: (a) the spin-orbit part of the optical potential for the nucleon-nucleus scattering, (b) the spin-orbit interaction responsible for the j< and j> occupancy of the knocked-out nucleon orbital, and, to a lesser extent, (c) the spin-orbit part of the NN-interaction. The triple differential cross section for quasifree scattering in the Distorted Wave Impulse Approximation (DWIA) is given by [15] d³σ/(dΩ₁ dΩ₂ dT₂) = K_F C²S Σ_m |⟨χ_{σ₁p₁} χ_{σ₂p₂}| τ_pN |χ_{σ₀p₀} ψ_jlm⟩|², (2) where K_F is a kinematic factor, p₀ (p₁) denotes the momentum of the incoming (outgoing) proton, p₂ the momentum of the knocked-out nucleon, and T₂ its energy. C²S is the spectroscopic factor associated with the single-particle properties of the removed nucleon and ψ_jlm is its wavefunction, labelled by the jlm quantum numbers. The DWIA matrix element includes the scattering waves χ_σp for the incoming and outgoing nucleons, with information on their spins and momenta, (σ, p), as well as the t-matrix for the nucleon-nucleon scattering. To first order this t-matrix is directly proportional to the free NN scattering t-matrix, τ_pN. For unpolarized protons, Eq. (2) is averaged over initial and summed over final spin orientations. 
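Given measured (or computed) spin-up and spin-down cross sections, the analyzing power defined above reduces to a simple ratio. A minimal sketch (the arrays hold placeholder numbers on a grid of ejected-proton energies, not data from the paper):

```python
import numpy as np

def analyzing_power(sigma_up, sigma_down):
    """A_y = (sigma_up - sigma_down) / (sigma_up + sigma_down), element-wise."""
    sigma_up, sigma_down = np.asarray(sigma_up), np.asarray(sigma_down)
    return (sigma_up - sigma_down) / (sigma_up + sigma_down)

# Placeholder cross sections (arbitrary units)
sigma_up = np.array([1.2, 1.0, 0.8, 0.9])
sigma_down = np.array([0.8, 1.0, 1.1, 0.7])
print(analyzing_power(sigma_up, sigma_down))
```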
This formalism has been used previously, and a good description of experimental data has been obtained with a proper choice of the optical potential and of the NN-interaction (see, e.g., Refs. [16,20]). In Ref. [3] it was shown that momentum distributions of the residual nuclei obtained in quasi-free scattering are well described using the eikonal approximation for the scattering waves χ_{p_i} entering Eq. (2). The method, appropriate for high-energy collisions, makes it easy to include relativistic and medium effects, and a connection with partial waves can be made for large angular momenta with L = pb, where p is the incident momentum and b the impact parameter. Here, we adopt the DWIA and the partial wave expansion method described in various publications, e.g., Refs. [10-13, 15, 16, 20, 22, 24]. The inputs for the calculations following Eq. (2) are (a) the optical potential for nucleon-nucleus scattering, (b) the NN-interaction, and (c) the ejected nucleon wavefunction ψ_jlm. For simplicity, the single-particle energies and wavefunctions ψ_jlm of the ejected nucleon are calculated with a global Woods-Saxon potential model. In Figure 2 we show the calculated cross sections for 40Ca(p,2p)39K at an incident proton energy E_p = 148 MeV, as a function of the recoil momentum, p_{A−1}, of the residual nucleus. The proton knockout is assumed to be from the 1d3/2 and 2s1/2 orbitals in 40Ca. The cross sections are integrated over the energy of the knocked-out proton and are given in units of µb sr−2 MeV−1. The optical potential of Ref. [19] and the NN-interaction of Ref. [23] were employed. The experimental data are taken from Ref. [17]. The dashed (solid) lines include (do not include) the spin-orbit part of the optical potential. In agreement with the conclusions of Refs. [17,18], we find that the spin-orbit part of the optical potential plays a small role in the description of the triple-differential cross sections for unpolarized protons. The inset panel in Figure 2 shows a comparison of our calculations with the experimental data of Ref. [17] for the 2s1/2 state as various NN-interactions are used. The shaded area includes results for seven NN-interactions taken from Refs. [21,23,[30][31][32][33][34]. We have observed that the choice of the NN-interaction has a greater impact on the results for unpolarized protons than the strength of the spin-orbit part of the optical potential. The same conclusion applies for the proton removal from the 1d3/2 orbital. Similarly, different choices for the other parts of the adopted optical potential also yield a broad range of results. We will discuss this problem again in the context of the Maris effect. In Figure 3 we show our calculations for the triple-differential cross sections (left) and analyzing powers (right) in (p,2p) reactions with 6Li, 12C and 40Ca at an incident proton energy of 392 MeV. The data are taken from Ref. [24]. To achieve a reasonable agreement with the experimental data, we use the NN interaction from Ref. [35] and the Dirac phenomenological optical potential from Ref. [36]. The solid lines include the spin-orbit part of the optical potential, and the calculations have been normalized to the data for d³σ/dΩ₁dΩ₂dT₁. Due to the nature of the data analysis [24], we do not try to identify the normalization values as spectroscopic factors. The dashed lines display our calculations without the spin-orbit part of the optical potential. 
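The global Woods-Saxon potential mentioned above has a standard central-plus-spin-orbit form. The sketch below shows one common parametrization; the numerical defaults (V0 = 50 MeV, Vso = 6 MeV, r0 = 1.25 fm, a = 0.65 fm) are typical textbook values, not the parameters used in the paper:

```python
import numpy as np

def woods_saxon(r, A, V0=50.0, Vso=6.0, r0=1.25, a=0.65):
    """Central Woods-Saxon well plus a surface-peaked spin-orbit form factor.

    Returns (V_central, V_so_radial) in MeV. The full spin-orbit potential is
    V_so_radial * <l.s>, with <l.s> = [j(j+1) - l(l+1) - 3/4] / 2, so that
    j = l + 1/2 orbitals are more bound (df/dr < 0). Constant length-scale
    factors are absorbed into Vso here.
    """
    R = r0 * A ** (1.0 / 3.0)
    x = np.exp((r - R) / a)
    f = 1.0 / (1.0 + x)                 # Fermi (Woods-Saxon) shape
    df_dr = -x / (a * (1.0 + x) ** 2)   # derivative, peaked at r = R
    return -V0 * f, Vso * df_dr / r

r = np.linspace(0.1, 12.0, 240)
Vc, Vso_r = woods_saxon(r, A=40)        # e.g. the 40Ca case discussed above
```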
Protons removed from the s-shell are chosen because the interpretation is rather simple: the Maris polarization should be null (for a nucleon knocked out of an s-orbital, L = 0 and thus L·S = 0), although the knocked-out proton can still acquire a non-zero angular momentum with respect to the (A−1) residue after the collision due to Final State Interactions (FSI). In fact, we observe that the spin-orbit part of the optical potential still plays a small but non-negligible role in our results. As suggested in Ref. [12], the Maris polarization effect should be manifest in measurements of A_y, i.e., it should be visible in analyzing power data, especially for protons removed from p-orbitals. This is best seen if A_y is displayed for fixed angles of the outgoing protons while scanning the energy of the ejected proton, as seen in Figure 4. The data are from Ref. [37]. One proton is measured at 30° and the other at −30°. The open circles are data for protons removed from the 1p1/2 orbital and the solid ones from the 1p3/2 orbital. In our calculations, shown by dashed and solid lines, we have employed the same NN-interaction and optical potential model as in the calculations presented in Fig. 2. As the number of neutrons increases in an isotopic chain, the nuclei should develop a larger neutron skin. The charge distribution in stable nuclei is well determined via electron scattering experiments, but similar experiments on unstable nuclei are very difficult, still far from being fully viable [38][39][40][41]. The determination of the neutron skin in a nucleus requires separate measurements of the matter density. Efforts in this direction involve the measurement of interaction cross sections [42], total neutron-removal cross sections [43], parity violation in electron scattering [44], Coulomb dissociation [45], antiprotonic atoms [46], dipole polarizability in (p,p') scattering [47], etc. The analyzing power, being a ratio of cross sections, factors out some of the uncertainties associated with the calculations. Moreover, because the spin-orbit part of the optical potential is peaked at the nuclear surface, the Maris effect is more sensitive to the surface region of the nucleus than the cross sections for unpolarized protons. Since the spin of the ejected nucleon will be depolarized more and more by the absorption effect as the nuclear size and neutron skin increase, a dependence of the Maris polarization on the neutron-skin thickness can be expected. Based on the arguments above, we consider the Maris polarization effect in neutron-rich nuclei and its dependence on the neutron number along a typical isotopic chain, e.g., for tin isotopes. Our calculations are not intended to be accurate, but rather to use the state-of-the-art theoretical knowledge one has of nuclear densities to explore the evolution of the Maris effect with the variation of the neutron skin. Most global optical potentials for proton-nucleus scattering reflect nuclear sizes and their dependence on the total number of nucleons, being insensitive to the build-up of a neutron skin in the nuclei. In order to study the role of the nuclear density and its neutron skin, we construct an optical potential from a folding model of the nuclear density with an effective nucleon-nucleon interaction. We chose the well-known Franey-Love interaction [30]. For the nuclear densities we adopt two models: (a) densities calculated with the Hartree-Fock-Bogoliubov (HFB) method with the BSk2 Skyrme interaction as described in Ref. 
[48], and (b) constant densities up to a sharp-cutoff radius. The microscopic HFB calculations are used to estimate the neutron skin of the nuclei along the isotopic chain. The neutron skin, defined as ΔR = √⟨r_n²⟩ − √⟨r_p²⟩, is extracted from the HFB calculations and used in part (b) of the prescription above to generate properly normalized proton and neutron sharp-cutoff densities. We quantify the magnitude of the Maris polarization in terms of the difference between the first maximum of A_y for the 2p1/2 orbital and the first minimum of A_y for the 2p3/2 orbital, denoted by ΔA_y = A_y^max(2p1/2) − A_y^min(2p3/2). (3) The choice of the 2p1/2 and 2p3/2 orbitals to explore the effect of the neutron excess is arbitrary. But it is worth mentioning that the single-particle 2p orbitals in tin isotopes are probably highly fragmented. This would have to be taken into consideration in future experiments. The single-particle wavefunctions ψ_jlm for these orbitals could be extracted from the HFB calculations, but for convenience we adopt the global Woods-Saxon potential defined previously to calculate the bound states along the tin isotopic chain. All 2p orbitals in tin are bound within this approximation. In Figure 5 we plot ΔA_y for (p,2p) reactions at E_p = 200 MeV with the densities defined in (a) and (b) discussed above. We assume that the two protons are detected at θ = 35° and θ = −35°, respectively. Using the lower scale, the graph shows the dependence of the observable in Eq. (3) as a function of the neutron excess (open circles), while the upper scale shows the same quantity as a function of the neutron skin (closed circles). These results imply that the increasing neutron number in an isotopic chain leads to a larger magnitude of the Maris polarization effect. The effective polarization increases by more than 30% along the tin isotopic chain. The dependence on the neutron skin is almost linear, although deviations from linear proportionality appear for large neutron excess. Since the proton density radius is nearly constant along the isotopic chain, as estimated with the HFB calculations, the steady increase of ΔA_y is a clue for the build-up of neutrons at the nuclear surface. In Figure 6 we show a comparison between the calculations displayed in Figure 5 for Eq. (3) (open circles) and those using sharp-cutoff densities, displayed as red squares in the figure. In this case the normalized proton and neutron sharp-cutoff densities are assumed to have the same neutron skin ΔR as the one obtained with the HFB densities. There are appreciable differences between the two calculations, reflecting the fact that the quantity defined in Eq. (3) is also sensitive to details of the densities such as their diffuseness. 
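The neutron-skin definition above reduces to computing RMS radii of the point-proton and point-neutron densities. A minimal numerical sketch, assuming illustrative two-parameter Fermi density profiles rather than the actual HFB output (the radius and diffuseness values below are invented for demonstration):

```python
import numpy as np

def rms_radius(rho, r):
    """RMS radius of a spherical density: sqrt( int rho r^4 dr / int rho r^2 dr )."""
    return np.sqrt(np.trapz(rho * r**4, r) / np.trapz(rho * r**2, r))

r = np.linspace(0.0, 15.0, 3000)                       # fm
fermi = lambda r, R, a: 1.0 / (1.0 + np.exp((r - R) / a))

# Illustrative parameters only (not HFB values): neutrons slightly more extended
rho_p = fermi(r, R=5.4, a=0.50)
rho_n = fermi(r, R=5.6, a=0.55)

delta_R = rms_radius(rho_n, r) - rms_radius(rho_p, r)
print(f"neutron skin Delta R = {delta_R:.3f} fm")
```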
In Figure 6 we also show a dashed curve calculated with a single sharp-cutoff density adding up the proton and neutron densities, with a single nuclear radius equal to R + ΔR/2, where ΔR is calculated from the HFB densities. Despite small deviations from the previous result displayed as red squares in the figure, the increase of ΔA_y along the isotopic chain for the dashed line is also representative of the increase of the nuclear radius, irrespective of whether the nuclear densities display a neutron skin or not. The inset panel in Figure 6 shows calculations for the same case as that performed for the dashed curve in the larger panel, but now, for the inset, we adopt a plethora of optical potentials [19,30,36,[50][51][52][53][54]. We observe a strong dependence of ΔA_y on the adopted optical potential, as expected. Nonetheless, ΔA_y still displays an increase with the neutron number in the isotopic chain. We have also noticed that a similar result and conclusion are obtained for the dependence of ΔA_y on various NN interactions, i.e., ΔA_y is also strongly dependent on the NN-interaction used. Therefore, using ΔA_y as a probe of the nuclear size or the neutron skin in nuclei requires a complementary study of other observables to determine the optical potential parameters as well as the adequate NN-interaction to be used in the theory. In conclusion, the Maris polarization effect is well known as a tool to investigate single-particle properties in nuclei. It has not yet been widely explored to study the evolution of nuclear properties in neutron-rich isotopes. Its sensitivity to the shell occupancy of orbitals with the same angular momentum allows for new applications in experimental studies carried out with secondary radioactive beams. Because experiments can now be carried out with a much larger precision than in the past, new techniques are increasingly being introduced to extend our knowledge of the nuclear physics of neutron-rich nuclei. We demonstrate that the magnitude of the Maris polarization effect increases with the neutron excess. However, the increasing magnitude of the effect cannot be related in a straightforward manner to the development of the neutron-skin thickness in neutron-rich nuclei; rather, it depends as well on the size of the nucleus and also on the diffuseness of the densities at the surface. The slope of the dependence of the calculated analyzing power on the neutron excess does not vary substantially with either the selection of the NN interaction or the optical potential. But, in contrast, its absolute magnitude does show a strong dependence on the choice of these two interactions. This work was supported in part by the U.S. DOE grant DE-FG02-08ER41533 and the U.S. NSF Grant No. 1415656. We thank HIC for FAIR for supporting visits (C.A.B.) to the TU-Darmstadt.
4,557
2017-07-28T00:00:00.000
[ "Physics" ]
Taxonomy of Fuzzy Multi-Attribute Decision Making Systems in Terms of Model, Inventor and Data Type Decision support systems are one of the choices decision-makers make in an attempt to cope with the problems related to the time required by the decision-making process. Such systems are known to improve the efficiency and accuracy of decision-making processes. In developing a decision support system, a certain calculation method is required as part of its processing. One of the most commonly used methods is FMADM. This research discusses the clustering of decision support systems using FMADM in an attempt to provide a taxonomy of decision support systems based on FMADM. Keywords-artificial intelligence; decision support system; fuzzy; taxonomy INTRODUCTION A decision support system (DSS) is a computerized system that provides results in the form of a ranking based on the assessment aspects determined by decision makers. DSSs are derived from expert systems and are part of the artificial intelligence (AI) field and of the applications that aim to help solve common knowledge-based cases [1]. DSSs are systems that try to gather and exploit human knowledge and experience in artificial intelligence systems so that they may assist in, or even perform, decision making [2]. Some examples of research on expert systems are stroke detection [3], animal disease identification [4,5] and motor engine damage detection [6]. One of the algorithms used in DSSs is the Multiple Criteria Decision Making (MCDM) algorithm. However, MCDM is divided into several types. This paper, following a similar approach to the one in [7], provides a short literature review on MCDM taxonomy focusing on Multi Attribute Decision Making (MADM), aiming to provide a taxonomy of Fuzzy Multi-Attribute Decision Making systems in terms of model, inventor, and data type. II. RESULT AND DISCUSSION MCDM is a decision-making method that can be used to establish the best choice from a number of alternatives based on certain criteria, e.g., size, standard, etc. [8]. However, MCDM has a minor disadvantage: if the data provided by the decision maker or the attributes of the data are incomplete, then the resulting decision will contain uncertainty. The problem of uncertainty can be caused by several things, namely: 1. information that cannot be calculated, 2. incomplete information, 3. unclear information, and 4. partial abandonment [9]. To solve these problems, research on the use of fuzzy MCDM began to be conducted in order to find methods with proven excellent performance. FMCDM can be divided into 2 models: fuzzy multi objective decision making (FMODM) and fuzzy multi attribute decision making (FMADM). The FMADM model can then be further divided into 2 models, namely the Yager and the Baas & Kwakernaak model. Based on the type of data, FMADM can be divided into 3 types, namely fuzzy data, crisp data, and fuzzy and crisp data [10]. Based on the method of application, FMADM can be divided into 3 types, namely the SAW method, the WP method and TOPSIS. FMADM taxonomies are shown in Figures 1-4 and are presented below. A. FMADM Inventor-Based Taxonomy 1) Yager Model The Yager model FMADM is the standard form of FMADM. According to [11], the Yager model has 5 completion stages (a code sketch follows after the list), which are: 1. Set a pairwise comparison matrix M between attributes based on Saaty's hierarchy procedure. 2. Determine the consistent weight w_j of each attribute based on the eigenvector method of Saaty. 3. Calculate the weighted membership values μ_j(x_i)^{w_j} of each alternative for each attribute. 4. 
Determine the intersection D = min_j μ_j(x_i)^{w_j} of all weighted attribute values. 5. Select the alternative with the largest membership degree in D, and set it as the optimal alternative. One of the researches related to DSSs using the Yager method is [12], which emphasizes the application of a DSS to solve cases about the classification of families as poor. A similar research, [13], was conducted to solve the best customer selection case. Both researches resulted in a desktop-based decision support system that was able to assist the decision-making process in their respective cases. 2) Baas & Kwakernaak Model In contrast to the Yager model, the Baas & Kwakernaak model is not as often used by researchers; it is based on the ranking of some elements [14].
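A compact sketch of the five Yager steps above, assuming a ready-made Saaty pairwise comparison matrix and fuzzy ratings; normalizing the eigenvector weights to sum to 1 is an assumption of this sketch, and all numbers are invented for illustration:

```python
import numpy as np

def yager_fmadm(pairwise, ratings):
    """Yager-model FMADM: Saaty eigenvector weights + weighted min-intersection.

    pairwise: (n_attr, n_attr) Saaty pairwise comparison matrix M.
    ratings:  (n_alt, n_attr) fuzzy membership degrees in [0, 1].
    Returns the index of the optimal alternative.
    """
    eigvals, eigvecs = np.linalg.eig(pairwise)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])  # principal eigenvector
    w = w / w.sum()                                       # normalized weights (assumption)
    weighted = ratings ** w                               # step 3: mu_j(x)^w_j
    D = weighted.min(axis=1)                              # step 4: fuzzy intersection (min)
    return int(np.argmax(D))                              # step 5: largest membership

# Invented example: 3 attributes, 2 alternatives
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
ratings = np.array([[0.7, 0.5, 0.8],
                    [0.9, 0.4, 0.6]])
print("optimal alternative:", yager_fmadm(M, ratings))
```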
912.8
2018-02-20T00:00:00.000
[ "Computer Science", "Mathematics" ]
Characterizations and Antibacterial Efficacy of Chitosan Oligomers Synthesized by Microwave-Assisted Hydrogen Peroxide Oxidative Depolymerization Method for Infectious Wound Applications The use of naturally occurring materials with antibacterial properties has gained great interest in infected wound management. Despite being an abundant resource in Vietnam, chitosan and its derivatives have not yet been intensively explored for their potential in such applications. Here, we utilized a local chitosan source to synthesize chitosan oligomers (OCS) using hydrogen peroxide (H2O2) oxidation under microwave irradiation. The effects of the H2O2 concentration on the physicochemical properties of OCS were investigated through the molecular weight, degree of deacetylation, and heavy metal contamination for the optimization of the OCS formulation. Then, the antibacterial inhibition was examined; the minimum inhibitory concentration and minimum bactericidal concentration (MIC and MBC) of OCS-based materials were determined against common skin-inhabitant pathogens. The results show that the local Vietnamese chitosan and its derivative OCS possessed high-yield purification, while the molecular weight and depolymerization efficiency of OCS were inversely proportional and proportional to the concentration of H2O2, respectively. Further, the MIC and MBC of OCS ranged from 3.75 to less than 15 mg/mL and 7.5-15 mg/mL, respectively. Thus, OCS-based materials induce excellent antimicrobial properties, can be attractive for wound dressings, and require further investigation. Introduction Chitosan is a derivative of chitin, the second most abundant biopolymer, found in shrimp and crab shells. Chitosan mainly comprises linear N-acetyl glucosamine and β-1,4-linked D-glucosamine units. Chitosan has been broadly exploited in a variety of fields, including agriculture, the food industry, water treatment, biotechnology, pharmaceutics, and many others, thanks to its pronounced biocompatibility, biodegradability, low cost, and non-toxicity [1][2][3][4][5]. Chitosan is highly viscous in aqueous solution due to its high molecular weight and is a prominent pH-responsive biopolymer, which is soluble in mildly acidic solution at pH below 6.3 while becoming insoluble, with gel-forming ability, at physiological pH (~7-7.4) [6][7][8]. These properties, however, may limit chitosan's applicability in medicine and biomedical studies that involve physiological conditions. Therefore, chitosan oligomers (OCS), derived from chitin or chitosan, with high water solubility thanks to their shorter chains and free amino (-NH2) groups in the D-glucosamine units, have emerged as a promising alternative [9,10]. Furthermore, OCS also exhibit bioactive properties such as antimicrobial, anti-inflammatory, antifungal, and antitumor activity, making them highly desired for biomedical applications such as wound healing scaffolds or regenerative medicine. In terms of the fabrication method, a variety of chemical, enzymatic, and physical processes have been proposed [11][12][13]. The enzymatic approach to degrade chitosan into OCS is simple and environmentally friendly, yet its production cost can be exorbitant [14,15]. Meanwhile, physical approaches using ultrasonic, microwave, and gamma rays can be employed for the synthesis of highly purified OCS [16][17][18][19]. However, it remains difficult to scale up the production of OCS using physical methods due to the lack of corresponding facilities [20]. 
On the other hand, the chemical approach, such as oxidative degradation of chitosan using hydrogen peroxide (H2O2), which can be used for large-scale production of OCS at a reasonable cost, has emerged as a promising alternative and received great research interest [21][22][23]. Nevertheless, the relative molecular weight of OCS produced by this method is widely distributed and thus requires intensive post-treatment to separate and purify the products [11]. Qin et al. enhanced the oxidative degradation of chitosan for the production of OCS using conventional heating combined with H2O2 treatment [24]. Najafabadi et al. proposed a UV irradiation-H2O2 system to enable faster and more efficient OCS production [25]. Besides, to improve the selectivity, reaction rate, and production efficiency, several studies utilized microwave irradiation as an effective solution for enhanced oxidative degradation of chitosan chains [23,26]. In the current study, the use of microwave-assisted H2O2 treatment for the production of OCS was employed for convenience as well as its time- and cost-saving benefits. Vietnam is one of the largest shrimp-exporting countries in the world, with a network worth billions of dollars per year [27]. However, crustacean processing may lead to a tremendous gross weight of shell waste, causing a large environmental burden. For both economic and environmental benefits, the use of local resources or biowaste from crustacean processing for the production of OCS has emerged as a promising and sustainable solution. Several studies in Vietnam attempted to synthesize OCS using different approaches, such as H2O2 degradation and gamma irradiation combined with H2O2 treatment, mainly for agricultural applications [18,19,21,28]. Nevertheless, studies utilizing the local chitin source to produce OCS for infectious wound management purposes are lacking. In this study, we aim to investigate the antibacterial efficacy of OCS derived from a local source in the Mekong Delta, Vietnam, using microwave-assisted H2O2 treatment, as a promising wound dressing material. First, the concentration of H2O2 was varied to determine its effects on the physicochemical properties of the synthesized OCS. Then, we examined the minimum inhibitory concentration and minimum bactericidal concentration (MIC and MBC) of OCS using the dilution test and investigated the inhibitory effects of OCS-based tablets using the agar diffusion test to determine the optimal OCS formulation. Recently, the surface modification or coating of antibacterial agents onto wound dressings has become a key approach for the fabrication of bacteria-preventive materials [28,29]. To demonstrate the applicability of the synthesized OCS for infectious wound management, the optimal OCS formulation was coated onto an electrospun poly(ε-caprolactone) (EsPCL) membrane using the multi-immersion technique [30], and the antibacterial inhibition of the OCS-coated EsPCL (EsPCLOCS) membranes was investigated. The findings suggest that the combination of this versatile synthesis and locally supplied chitosan produces highly purified OCS with excellent antibacterial activities and is promising and beneficial in terms of economic efficiency. Further, the EsPCLOCS membranes exhibit good antibacterial effects and warrant further investigation as potential wound dressing materials. Preparation of Chitosan Oligomers (OCS) The OCS preparation method was reported elsewhere and used here with some modifications [23]. 
Chitosan powder was immersed into H2O2 solutions of different concentrations (5%, 10%, and 15% v/v) at 30 °C for 10 min. The mixture was microwave-irradiated at 400 W for 3 min, and the solution was then cooled down to room temperature. The solution was introduced into EtOH at a volume ratio of 1:3 and centrifuged using a Sigma 3-30KS at 10,000 rpm at 4 °C for 15 min to collect the precipitate, which was then lyophilized with a LABCONCO 7752020 series freeze dryer to obtain OCS powders. The OCS powders were compressed into tablets with a mass of 120 mg and a diameter of 13 mm by a hydraulic single-tablet punching machine (Shanghai Pharmaceutical Machinery Co. Ltd., Shanghai, China) for further investigations. Characterization of Chitosan Oligomers Gel Permeation Chromatography (GPC) The weight-average (M_w) and number-average (M_n) molecular weights and the polydispersity index (PDI) of OCS were measured by Gel Permeation Chromatography (GPC) (Shimadzu/LC-10ADvp, Kyoto, Japan) with a refractive index detector RID-10A. The system was operated with water as the mobile phase. All the samples were dissolved at 1.5 mg/mL in 0.3 M acetic acid and 0.2 M sodium acetate and filtered before the GPC measurement, with a flow rate of 0.8 mL/min at 40 °C and a sample volume of 20 µL. Pullulan standards with a molecular weight range from 1.42 to 1220 kDa were used for calibrating the OHpak SB-804 HQ columns (dimensions 8 mm × 300 mm). The depolymerization efficiency (DE) is calculated based on the following formula: DE (%) = M_w(chitosan) / M_w(OCS) × 100. Nuclear Magnetic Resonance (NMR) Spectrometry The 1H-NMR spectrum of OCS was measured using a liquid-state 1H-NMR spectrometer (400 MHz, δ in ppm; Bruker Avance-400 MHz FT-NMR, Bruker Corp, Billerica, MA, USA). The chitosan oligomer samples were dissolved in DMSO/DCl and filtered prior to the NMR measurement. The degree of deacetylation (DD) of OCS was then calculated based on the following equation [31]: DD (%) = [1 − (A2/3)/(A1/6)] × 100, where A1 is the integral value of the protons at positions C2-C6 on the sugar ring, measured in the range δ 3-6 ppm, and A2 is the integral value of the three N-acetyl protons of N-acetyl glucosamine at around δ 2 ppm. Inductively Coupled Plasma Mass Spectroscopy (ICP-MS) Chitosan and the OCS solutions were evaluated for contamination with lead (Pb), arsenic (As), and mercury (Hg), which are common heavy metals detected in products extracted from shrimp shells, using ICP-MS (NexION2000, Perkin Elmer, MA, USA). The minimum limit of detection for all the contaminants is 0.02 ppm. Preparation and Morphological Characterization of the EsPCLOCS Membrane Preparation of the EsPCLOCS Membrane The EsPCL membrane was fabricated as previously reported [32]. Briefly, PCL was dissolved in an acetone/acetic acid solution with a v/v ratio of 7:3 over 24 h to create a 22% (w/v) PCL solution. Then the solution was filled into a syringe pump and electrospun with a tip-to-collector distance of 10 cm and a voltage of 15 kV. For fabricating the EsPCL coated with OCS (EsPCLOCS) membrane, EsPCL membranes (50 cm × 30 mm) were plasma-treated with a Harrick Plasma Cleaner PDC-32G-2 (GaLa Instrumente, Bad Schwalbach, Germany) for 3 min at 30 W and 13.56 MHz. To prepare the EsPCLOCS membrane, a 3% w/v OCS solution was obtained by dissolving OCS15% powder in deionized water and was then sprayed perpendicularly onto the surface of the EsPCL membranes. Then, the samples were incubated at 37 °C for 30 min for the adsorption of OCS onto the fibers. 
Three different EsPCLOCS samples were fabricated, labeled C1, C3, and C6, corresponding to one, three, and six coatings, respectively. All the samples were sterilized by UV irradiation for 45 min before further antibacterial tests. Morphological Characterization The EsPCL and EsPCLOCS membranes were observed using scanning electron microscopy (SEM) (JSM-IT100, JEOL, Tokyo, Japan) after gold sputter-coating (JEOL Smart Coater, Tokyo, Japan) at 10 kV to evaluate the morphology of the membranes and the OCS layer covering the EsPCL fibers after each coating. Antibacterial Assays In the bacterial experiments, the agar disc diffusion and MIC/MBC methods were applied to determine the antibacterial effects of OCS on five strains of skin-inhabitant microorganisms, including the bacteria S. aureus, P. aeruginosa, and S. iniae and the fungi C. albicans and T. insectorum. For the EsPCLOCS membranes, the antibacterial evaluation was performed with S. aureus and P. aeruginosa. Prior to the experiments, a colony of each strain was collected from an agar plate, transferred to a 5 mL MHB tube, and cultured at 37 °C for 24 h. Then a bacterial suspension of each strain at an optical density at 620 nm (OD620) of 0.08-0.1 (equal to the 0.5 McFarland standard, approximately 1-2 × 10^8 CFU/mL) was obtained by dilution. Agar Disk Diffusion The inhibitory effect of each sample against a specific bacterial strain was examined separately on Mueller-Hinton agar (MHA) plates. Briefly, 150 µL of the prepared bacterial suspension was added to and spread on the MHA surface. Then, the OCS tablets and EsPCLOCS membranes, with diameters of 13 mm and 8 mm, respectively, were placed on the MHA plate and incubated for 24 h. The bacterial growth inhibition zone around the samples was determined. Minimum Inhibitory Concentration (MIC) and Minimum Bactericidal Concentration (MBC) Determination of the MICs of OCS15% was carried out on a 96-well plate, with up to 10 different dilution concentrations tested per row. The OCS15% powder was dissolved in MHB for 24 h to obtain a 2X solution (60 mg/mL). One hundred microliters of MHB were first dispensed into all wells. The 2X antibacterial solution was added to the first well and mixed well with the MHB. Then 100 µL of the mixture in the first well was transferred to the next well, and the process was repeated to obtain 100 µL of MHB containing two-fold dilutions of OCS15% in the first ten wells. Finally, 100 µL of the prepared bacterial suspension was added to columns 1 to 11. After that, the microplate was incubated at 37 °C for 24 h. The absorbance was obtained using a microplate reader at a wavelength of 620 nm. The MIC was determined as the lowest concentration at which no growth was observed. To determine the MBC, 10 µL of the mixture from the first to the tenth well was inoculated in a circle on an MHA plate following a clockwise direction, while 10 µL from the 11th well was plated in the middle. The MHA plate was incubated right side up at 37 °C for 24 h. The MBC corresponds to the lowest concentration with no observable bacterial growth on the MHA [33]. Statistical Analysis All experiments were conducted in triplicate unless specified otherwise. Statistical analysis was performed using SigmaPlot v12.0 (SSI, Chicago, IL, USA). The differences between samples were analyzed by one-way analysis of variance (ANOVA) followed by the Tukey-Kramer post hoc test. The data are expressed as the mean ± standard deviation, and p < 0.05 was considered statistically significant. 
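The two-fold dilution readout described above maps directly onto a small helper: given the starting concentration and a row of OD620 readings, the MIC is the lowest concentration whose well shows no growth. A minimal sketch with invented readings; the 0.1 OD growth threshold and the 30 mg/mL starting concentration are assumptions of this sketch, not values stated in the paper:

```python
def mic_from_plate(start_mg_ml, od_row, od_threshold=0.1):
    """Return the MIC from a two-fold serial dilution row.

    start_mg_ml: concentration in well 1 (e.g. 30 mg/mL after mixing 2X with MHB).
    od_row: OD620 readings for wells 1..10, well 1 = highest concentration.
    """
    concentrations = [start_mg_ml / 2**i for i in range(len(od_row))]
    inhibited = [c for c, od in zip(concentrations, od_row) if od < od_threshold]
    return min(inhibited) if inhibited else None  # lowest concentration with no growth

# Invented OD readings: growth resumes from well 4 onward
print(mic_from_plate(30.0, [0.02, 0.03, 0.05, 0.45, 0.60, 0.72, 0.80, 0.85, 0.88, 0.90]))
# -> 7.5 (mg/mL)
```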
Characterizations of Chitosan Oligomers (OCS) Highly purified chitosan is essential for biomedical applications. According to the ICP-MS results shown in Table 1, traces of toxic metals were found only in the chitosan sample, while none were detected in OCS. Chitosan contained 0.052 ppm of lead and 0.05 ppm of arsenic. Meanwhile, the toxic concentrations of lead, arsenic, and mercury are reported to be 0.5, 0.15, and 1.5 ppm, respectively, according to the United States Pharmacopeia for Food and Drugs. Thus, based on the ICP-MS results, only insignificant levels of two of those heavy metals (lead and arsenic) were detected in the chitosan from the local Vietnamese brand. OCS synthesized using hydrogen peroxide oxidation under microwave irradiation was examined by evaluating its weight-average (M_w) and number-average (M_n) molecular weights and its polydispersity index, shown in Table 2. The method showed high effectiveness in terms of depolymerization. The molecular weight of raw chitosan was measured as 310 kDa, and the depolymerization efficiency ranged from 4500 to 12,600%. The molecular weight of OCS decreases gradually as the hydrogen peroxide concentration is raised. The highest molecular weight among the three samples belongs to OCS5%, with an average of 7 kDa. Meanwhile, the average M_w of OCS10% is about 5 kDa, which is approximately double that of OCS15%. Supplementary Materials Figure S1 shows that the M_w distributions of OCS5% and OCS10% overlap, with considerably broad curves. In particular, OCS10% has an M_w range from 178 Da to 20 kDa with a PDI of 2.5, while OCS5% has an M_w distribution below 25 kDa. Meanwhile, OCS15% reveals a narrow distribution curve with M_w under 14 kDa and a PDI of 2.16. Figure 1 shows the 1H-NMR spectrum of OCS15%. The integral values are determined by the H molecules from C2-C6 in the sugar ring of the chitosan monomers (H2-6) from δ 2.6 to 4.2 ppm and the N-acetyl methyl group at δ around 2 ppm. The DD was calculated based on the integral values in the 1H-NMR figures. The data showed that all the samples had a DD over 90%. Furthermore, the method tended to slightly increase the DD of the samples compared to the original chitosan. However, there were no significant differences in the DD of the three samples, in which the highest DD belonged to OCS15% (95.71%), whilst OCS5% and OCS10% had lower DDs of around 92%. Table 3 shows the MIC and MBC concentrations of OCS15% against five different pathogens. In general, the MIC and MBC of OCS were closely comparable regardless of pathogen type. The MICs of all the bacteria and fungi were not precisely determined except for P. aeruginosa at 3.75 mg/mL. The MIC concentrations of OCS15% can be found in the range of 3.75 to 7.5 mg/mL for T. insectorum and 7.5 to 15 mg/mL in the case of S. aureus, P. aeruginosa, and C. albicans. OCS was proved to have bactericidal and fungicidal properties rather than just bacteriostatic or fungistatic ones through the MBC. S. iniae and T. insectorum were killed when the concentration of OCS reached 7.5 mg/mL, while S. aureus, P. aeruginosa, and C. albicans required 15 mg/mL of OCS. 
The antimicrobial effects of the OCS tablets on various bacteria and fungi are shown in Figure 2. All the OCS specimens exhibited inhibition against all pathogens. The Gram-negative pathogen (P. aeruginosa) appeared less sensitive, whilst the fungi, especially T. insectorum, were vulnerable to OCS. Moreover, in terms of Gram-positive pathogens, similar zones of inhibition were seen for both S. aureus and S. iniae. Among the samples, OCS10% had the highest inhibitory effect on the pathogens, with inhibitory zones ranging from approximately 30 mm against P. aeruginosa to over 40 mm against C. albicans, whereas OCS15% exhibited less efficiency against those microorganisms. The adsorption of OCS on the EsPCL membrane before and after several coatings was evaluated by SEM micrographs. Figure 3A shows the differences in the surface morphology of the EsPCL and EsPCLOCS membranes. In general, the fiber surfaces demonstrated that the amount of OCS adsorbed onto the treated EsPCL membrane increased proportionally with the number of spraying times (Figure 3A(1)-(4)). In comparison to the fiber diameters of EsPCL in Figure 3A(1), the PCL fiber diameters enlarged gradually with the coating times. Moreover, the surface of the EsPCL membranes was smoother than that of the EsPCLOCS membranes. In particular, OCS agglutinated around the PCL fibers (Figure 3A(2)), then covered the pores on the membrane surface (Figure 3A(3)), and finally provided a complete coating layer over the whole membrane (Figure 3A(4)). Antibacterial Properties The antibacterial property of the EsPCLOCS membranes with different OCS coating times was determined using the agar disc diffusion method. Figure 3B displays the zones of inhibition of the EsPCLOCS membranes against P. aeruginosa and S. aureus. The results show that both pathogens were inhibited in a coating-layer-dependent manner, except for sample C1, which showed no inhibitory effect against P. 
aeruginosa. The Gram-negative strain was found to be more susceptible to the coated membrane than the Gram-positive one. In particular, it is clear that increasing the number of OCS coating layers on the surface of the EsPCL membranes led to a significantly higher inhibitory effect against the pathogens, with C3 and C6 having inhibition zone diameters of 9 ± 0.3 mm and 10.2 ± 0.4 mm against S. aureus, respectively. Scale bars: 10 µm in Figure 3A and 3 mm in Figure 3B. Discussion In this study, we prepared OCS with different molecular weights by adjusting the H2O2 concentration. H2O2 oxidation under the support of microwave irradiation is a simple, inexpensive, and rapid method to enhance the water solubility of chitosan by cutting its polymer chains to synthesize OCS. Based on the GPC results, the proposed method broke the chitosan polymer chains down to molecular weights of around a few hundred to a few thousand Da, making the product appropriate for antibacterial applications [34]. The main principle of the oxidative degradation is based on the instability of H2O2 molecules. H2O2 is easily decomposed under microwave irradiation and heat into the hydroperoxyl anion (HOO−), which then interacts with hydrogen peroxide molecules to form the hydroxyl radical (HO·) and the superoxide anion (O2−). These two oxidants are powerful enough to cut the polymer chain through the following reactions [26,35]: H2O2 ⇌ H+ + HOO−; HOO− + H2O2 → HO· + O2− + H2O; (GlcN)m+n + HO· → (GlcN)m + (GlcN)n, where (GlcN)m and (GlcN)n are chitosan chains that have m and n glucosamine units, respectively. Notably, the degradation of chitosan into OCS depends on the concentration of H2O2. It can be seen that the 5% concentration of H2O2 inadequately degraded the polymer, mainly because of the shortage of H2O2 available to interact with chitosan. Meanwhile, the DD is unlikely to be affected by this microwave irradiation combined with the H2O2 treatment method, which is in contrast with other conventional methods [36]. That the DD was not affected in this study can be attributed to the rapid reaction time accelerated by the microwave; the H2O2 molecules and microwave power only interact with the chitosan chains to degrade the polymer rather than continuously reacting with the -NH2 groups of chitosan. Furthermore, microwave irradiation was used in this study in combination with H2O2 treatment to improve the reaction rate and production efficiency of OCS as well as to narrow the distribution range of the OCS molecular weight. The average M_w of OCS reduced significantly from 7 kDa to 2 kDa as the H2O2 concentration increased from 5% to 15%. The trend is similar to previous studies employing microwave-assisted H2O2 treatment under different experimental conditions (i.e., H2O2 concentration, microwave reaction time, microwave power, and so on) [20,23,37]. In this study, we used a relatively high concentration of H2O2 that significantly reduced the reaction time to only 3 min, as compared to a previous study that used a low concentration of H2O2 and required a longer reaction time (more than 10 min) to obtain OCS of less than 2 kDa [20]. Further, the DD and M_w of the OCS5% obtained in this study were much higher and lower, respectively, compared to the values reported by Zhang et al., who used a shorter microwave time (75 s), although with higher microwave power (650 W) and lower H2O2 concentration (2%) [37]. In terms of antimicrobial effects, the pathogens used for the in vitro antibacterial assays, including S. aureus, S. iniae, P. aeruginosa, C. albicans, and T. 
insectorum, are five common skin-inhabitant microorganisms [38][39][40][41]. In the moist environment of the agar disc, the OCS started to dissolve and diffuse after being placed on the agar surface, owing to its high water solubility, and induced inhibitory effects against the tested pathogens. Notably, OCS of different molecular weights induced different inhibitory effects. OCS15%, with an M_w of 2-3 kDa, tends to have relatively low activity against all pathogens except C. albicans, although it has the best water solubility. On the other hand, OCS10%, which has an M_w of around 5 kDa, provides the strongest effect on all pathogens. With an M_w of 7 kDa, OCS5% shows limited activity on all types of microorganisms except T. insectorum. These results agree with a previous study in which different pathogens reacted differently to different M_w of OCS [42]. Further, we noticed that the effect of OCS on pathogens may differ depending on its form. In particular, the MIC results of OCS15% show a higher impact on P. aeruginosa than on Gram-positive S. aureus, only 3.75 mg/mL compared to about 7.5 mg/mL. However, in the agar diffusion test, there were no notable differences between the inhibitory zones of these strains. Similarly, C. albicans, which was relatively resistant to the OCS solution with a MIC of about 7.5 mg/mL, tended to be the most vulnerable in the agar test, with the largest zone of inhibition. While chitosan is believed to be only bacteriostatic, its derivative, OCS, is not only bacteriostatic but also acts as a bactericide and fungicide. This characteristic has rarely been reported due to a lack of attention. Lee and colleagues previously claimed that 10 kDa OCS at 1 to 10 mg/mL could have stronger bactericidal effects on V. vulnificus, a life-threatening wound infection pathogen, than 1 kDa OCS [43]. That is consistent with our results, since OCS15% barely contains fragments over 10 kDa in its molecular weight distribution, whereas OCS5% and OCS10% have broader M_w ranges with mean M_w closer to 10 kDa. The capability of OCS can be explained by a multi-facet mechanism much related to that of chitosan. In terms of electrostatic interaction, with the reduction of M_w, the -NH2 group, which carries the positive charge of the chitosan structure, can more readily attach to the negatively charged bacterial cell wall, hence enhancing the antibacterial characteristics. Moreover, the smaller the M_w of chitosan, the higher the mobility of the polymer chains to bind to more colonies. Thus, the interaction between the bacteria and OCS happens faster and inactivates bacteria faster. It can be deduced that all the proposed antibacterial mechanisms of chitosan occur: the bigger OCS fragments can attach to the cell wall and inhibit nutrient absorption, while the smaller fragments can diffuse through the cell membrane and bind to bacterial DNA, disturbing mRNA transcription and protein synthesis. Furthermore, some researchers claimed that chitosan with a higher DD (over 90%) exhibits stronger activity than that with a lower DD (under 83.7%) [44]. The DD can contribute to the process due to the presence of more positively charged -NH2 groups in the chitosan structure [45]. The incorporation of OCS onto the EsPCL membrane could become a promising approach in the development of bioactive wound dressings. OCS15% was utilized to coat the EsPCL due to its best water solubility. 
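The molecular-weight and deacetylation metrics discussed above are simple ratios of measured quantities; a minimal sketch of both calculations, following the formulas given in the methods section (the input numbers below are illustrative, not measured values):

```python
def depolymerization_efficiency(mw_chitosan_kda, mw_ocs_kda):
    """DE (%) = M_w(chitosan) / M_w(OCS) x 100."""
    return mw_chitosan_kda / mw_ocs_kda * 100.0

def degree_of_deacetylation(a1_h2_h6, a2_n_acetyl):
    """DD (%) from 1H-NMR integrals: A1 = H2-H6 (6 protons), A2 = N-acetyl (3 protons)."""
    return (1.0 - (a2_n_acetyl / 3.0) / (a1_h2_h6 / 6.0)) * 100.0

print(depolymerization_efficiency(310.0, 7.0))   # ~4430%, cf. the reported 4500%
print(degree_of_deacetylation(6.0, 0.13))        # ~95.7% for these illustrative integrals
```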
Moreover, since OCS15% has the lowest anti-microorganism effect in the agar diffusion test, it is reasonable to believe that the other samples will perform better in terms of bacterial prevention. Due to the hydrophobicity of PCL, adsorption would be limited during the coating process. Thus, the membrane was hydrophilized using plasma treatment, which exposes polar hydroxyl and carboxyl functional groups (-OH and -COOH) [46] on the surface of the electrospun PCL fibers, creating hydrogen bonds with water molecules. The adsorption efficiency of OCS on the PCL fibers was evaluated by SEM. The SEM images show that OCS was successfully coated onto the EsPCL membrane (Figure 3). The procedure of OCS adsorption onto the EsPCL membrane is similar to that of Gel and Ag in our previous research [30]. In the first step, OCS attaches to the PCL fibers thanks to hydrogen bonding between the -NH2 groups of OCS and both the -OH and -COOH groups on the membrane surface. Then, when all the functional groups have interacted and the membrane is covered by the OCS layer, the excess OCS cross-links with itself and increases the layer thickness. In addition, it is clear that the OCS formed a layer on the membrane whose thickness increases correspondingly with the number of coatings. When it comes to the agar disc diffusion results, the zones of inhibition show that C1 (mono-coating) has approximately no effect, while C6 (multi-coating) reveals the highest prevention against both types of bacteria. The difference between those membranes is mainly caused by the amount of OCS released from the membrane. Therefore, the control of the OCS coating on the EsPCL membrane is important and requires further study. Conclusions In this research, chitosan and OCS derived from local Vietnamese sources were evaluated for their heavy metal contamination level and were demonstrated to have adequate properties and high-yield purification for biomedical applications. We employed H2O2 treatment under microwave irradiation to degrade chitosan into OCS with M_w from 2 to 7 kDa, in which the DD of chitosan was not affected due to the short irradiation time. The synthesized OCS were shown to possess not only strong inhibition against different skin-inhabitant microorganisms but also bactericidal effects against them. Furthermore, the use of tablet compression and coating methods for antibacterial testing is promising for evaluating the effects of antimicrobial agents in powder formulations. The OCS15% sample possessed the most suitable molecular weight, water solubility, and antibacterial properties for further applications. Further investigation of the incorporation of bioactive agents such as OCS into the EsPCL membrane should be considered with regard to bleeding control and actual wound-healing effects in animal studies. Data Availability Statement: The data used to support the findings of this study are included in the article.
Integrability and renormalizability for the fully anisotropic SU(2) principal chiral field and its deformations For the class of 1 + 1 dimensional field theories referred to as the non-linear sigma models, there is known to be a deep connection between classical integrability and one-loop renormalizability. In this work, the phenomenon is reviewed on the example of the so-called fully anisotropic SU(2) Principal Chiral Field (PCF). Along the way, we discover a new classically integrable four parameter family of sigma models, which is obtained from the fully anisotropic SU(2) PCF by means of the Poisson-Lie deformation. The theory turns out to be one-loop renormalizable and the system of ODEs describing the flow of the four couplings is derived. Also provided are explicit analytical expressions for the full set of functionally independent first integrals (renormalization group invariants).
Introduction One of the spectacular instances of when ideas from physics and geometry come together is in the study of a class of field theories known as the Non Linear Sigma Models (NLSM). Mathematically, these are defined in terms of maps between two (pseudo-)Riemannian manifolds known as the worldsheet and the target space, such that the classical equations of motion take the form of a generalized version of Laplace's equation [1]. In physics, one of the uses of NLSM is as low energy effective field theories, with the choice of the target space being dictated by the symmetries of the problem. The first such proposal appeared in a paper of Gell-Mann and Levy [2]. They put forward a Lagrangian density (1.1) as an effective field theory of pions, in which the four component field n⃗ = (n_1, n_2, n_3, n_4) is constrained to lie on the three dimensional round sphere whose radius coincides with 1/f. Thus the target space is S^3 equipped with the homogeneous metric, while the worldsheet is four dimensional Minkowski spacetime M^{1,3}. The field theory is known as the O(4) sigma model as it possesses O(4) symmetry, the group of isometries of the three-sphere. Ignoring global aspects, one may replace the latter by SU(2) × SU(2), which play the role of the vector and axial symmetries appearing in the 'chiral limit' of QCD. For this reason the model (1.1) is also referred to as the SU(2) principal chiral field. The O(4) sigma model is rather special in 1 + 1 dimensional spacetime M^{1,1}. In this case, as was pointed out by Polyakov, the Lagrangian (1.1) defines a renormalizable QFT. Following the traditional path-integral quantization, the model should be equipped with a UV cutoff Λ [3]. It was shown to one-loop order that a consistent removal of the UV divergences can be achieved if the bare coupling is given a dependence on the cutoff momentum, described by the RG flow equation (1.2) [4]. Here ℏ stands for the dimensionless Planck constant while N = 4 (the computation was performed for the general O(N) sigma model with target space S^{N-1}). Notice that in the continuum limit Λ → ∞ the coupling constant f^2 approaches zero. In turn, the curvature of the sphere to which the fields n_j(x^0, x^1) belong vanishes, so that the theory becomes non-interacting. This phenomenon, known as asymptotic freedom, indicates consistency of the quantum field theory. As a result of the work of Polyakov and, later, Zamolodchikov and Zamolodchikov [5], who proposed the associated scattering theory, it is commonly believed that the O(N) sigma model in 1 + 1 dimensions is a well defined (UV complete) QFT.
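The displayed equations (1.1) and (1.2) were lost in extraction. The following is a plausible reconstruction of their standard textbook form; the overall normalizations are assumptions and may differ from the source's conventions.

```latex
% Plausible reconstruction of eqs. (1.1)-(1.2); the normalizations
% (factors of 2, pi, hbar) are conventional and may differ from the source.
\begin{align}
\mathcal{L} &= \tfrac{1}{2}\,\partial_\mu \vec{n}\cdot\partial^\mu \vec{n}\,,
\qquad \vec{n}\cdot\vec{n} = \frac{1}{f^{2}} \tag{1.1} \\[4pt]
\Lambda\,\frac{\partial f^{2}}{\partial \Lambda} &=
  -\,\frac{\hbar\,(N-2)}{2\pi}\,f^{4} \;+\; O(\hbar^{2})\,,
\qquad N = 4 \tag{1.2}
\end{align}
```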
The renormalizability of general NLSM in 1 + 1 dimensions was discussed in the work of Friedan [6]. He considered the class of theories where the Lagrangian density takes the form (1.3), in which G_{μν}(X) is the metric written in terms of local coordinates X^μ on the target space. The couplings are encoded in this metric, so that the latter is taken to be dependent on the cutoff Λ. Extending the results of Ecker and Honerkamp [7], Friedan computed the RG flow equation to two loops. To the leading order in ℏ it takes the form (1.4), where R_{μν} is the Ricci tensor built from the metric. Without the O(ℏ^2) term, (1.4) is usually referred to as the Ricci flow equation [8], which is a partial differential equation for G_{μν} = G_{μν}(X | τ). It found a remarkable application in mathematics in the proof of the Poincaré conjecture [9,10]. The question of renormalizability can be addressed within a class of NLSM where the target space metric depends on a finite number of parameters. The simplest example is the O(N) sigma model, whose target manifold belongs to the family of the (N − 1) dimensional round spheres, characterized by the radius 1/f. In this case, the Ricci flow equation boils down to the ordinary differential equation (1.2). Another example is the Principal Chiral Field (PCF), where the target space is the group manifold of a simple Lie group G equipped with the left/right invariant metric. The latter is unique up to homothety and, in local coordinates, is defined by the relation (1.5), where U ∈ G, e is the homothety parameter and the angular brackets ⟨•, •⟩ denote the Killing form in the Lie algebra of G.¹ The Ricci flow (1.4) implies the ordinary differential equation (1.6), with C_2 being the value of the quadratic Casimir in the adjoint representation. This equation was essentially obtained in the original work of Polyakov [4], see also [3]. Notice that the SU(2) PCF coincides with the O(4) sigma model. In this case C_2 = 2, while (1.6) and (1.2) are the same provided that e^2 ≡ 2f^2. An example of an NLSM which is renormalizable within a two parameter family is the so-called anisotropic SU(2) PCF. In this case the SU(2) × SU(2) isometry of the target space is broken down to SU(2) × U(1), and the manifold is still topologically S^3 but equipped with a certain asymmetric metric. The latter is given by (1.7), where O is an operator acting from the Lie algebra su(2) to itself depending on the additional deformation parameter r, and P_3 projects onto the Cartan subalgebra. The Ricci flow equation reduces to a system of ordinary differential equations on e and r (1.9). In the domain −1 < r < 0, similar to the SU(2) PCF, the theory is asymptotically free and it turns out to be a consistent QFT.
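Equations (1.3) and (1.4) were likewise lost in extraction; a plausible reconstruction of their standard form follows, with normalizations again assumed rather than taken from the source.

```latex
% Plausible reconstruction of eqs. (1.3)-(1.4); the hbar/2pi
% normalization of the flow is conventional and may differ.
\begin{align}
\mathcal{L} &= G_{\mu\nu}(X)\,\partial_{+}X^{\mu}\,\partial_{-}X^{\nu} \tag{1.3} \\[4pt]
\frac{\partial G_{\mu\nu}}{\partial \tau} &=
  -\,\frac{\hbar}{2\pi}\,R_{\mu\nu} \;+\; O(\hbar^{2}) \tag{1.4}
\end{align}
```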
When the τ dependence of the metric, satisfying the Ricci flow equation, is contained in a finite number of parameters, the partial differential equation (1.4) reduces to a system of ordinary ones. From the point of view of physics, this means that the corresponding NLSM depends on a finite number of coupling constants and is one-loop renormalizable within this class. The construction of such solutions is difficult to achieve even when the dimension of the target manifold is low. Among the most impressive early results was the work of Fateev [11], who discovered a three parameter family of metrics solving the Ricci flow equation. The NLSM with this background is a two parameter deformation of the SU(2) PCF, which contains the anisotropic case as a subfamily. A guiding principle for exploring the class of renormalizable NLSM was formulated in the work [12]. It arose from the observation that all the above mentioned models turn out to be classically integrable field theories. It is now believed that there is a deep relation between classical integrability and one-loop renormalizability in 1 + 1 dimensional sigma models. The notion of classical integrability in 1 + 1 dimensional field theory requires explanation. Recall that a mechanical system with d degrees of freedom is called integrable (in the Liouville sense) if it possesses d functionally independent Integrals of Motion (IM) in involution. This concept is difficult to extend to a field theory, where the number of degrees of freedom is infinite. A suitable paradigm of integrability in the case of 1 + 1 dimensions arose from the works of the Princeton group [13] and was later developed in the papers of Lax [14] and Zakharov and Shabat [15]. A key ingredient is the existence of the so-called Zero Curvature Representation (ZCR) of the Euler-Lagrange equations of the classical field theory, ∂_+ A_− − ∂_− A_+ + [A_+, A_−] = 0 (1.10), where A_± = A_±(x | λ) is a Lie-algebra valued worldsheet connection which also depends on the auxiliary (spectral) parameter λ. The ZCR implies that the Wilson loops T(λ) = Tr P exp(∮_C A), where the trace is taken over some matrix representation of the Lie algebra, are unchanged under continuous deformations of the closed contour C. If suitable boundary conditions are imposed, this can be used to generate IM. For instance, in the case when the worldsheet is the cylinder and the connection is single valued, the contour C may be chosen to be the equal-time slice at some x^0 as in fig. 1 (Figure 1: the integration contour for the Wilson loop can be moved freely along the cylinder). Then, it is easy to see that T(λ) does not depend on the choice of x^0, i.e., it is an integral of motion. Due to the dependence on the arbitrary complex variable λ, T(λ) constitutes a family of IM. The existence of these may provide a starting point for solving the classical equations of motion by applying the inverse scattering transform [16]. For this reason, we say that a 1 + 1 dimensional classical field theory is integrable if it admits the ZCR.² The theme of this paper is the interplay between classical integrability and one-loop renormalizability in sigma models. Its structure is as follows. Section 2 is devoted to a discussion of the so-called fully anisotropic SU(2) PCF, whose target space metric is given by (1.12). Here P_a are projectors onto the basis t_a of the Lie algebra su(2), which is taken to be orthogonal w.r.t.
the Killing form. The theory is a two parameter deformation of the SU(2) PCF and it reduces to the latter when all the couplings coincide, I_1 = I_2 = I_3. In addition, for the special case I_1 = I_2 it becomes the anisotropic SU(2) PCF, whose target space metric was presented above in eq. (1.7). We discuss the classical integrability of the model with metric (1.12). On the other hand, the latter is shown to be a solution of the Ricci flow equation for a certain τ dependence of the couplings I_a = I_a(τ). The corresponding system of ordinary differential equations is derived and its first integrals are obtained. In section 3 the concept of the Poisson-Lie deformation [17], which preserves integrability, is introduced. We apply it to the fully anisotropic SU(2) PCF and obtain a new classically integrable field theory depending on four parameters. It is argued that the resulting model is one-loop renormalizable. The system of ODEs for the τ dependence of the four couplings is presented and explicit analytical expressions for the renormalization group invariants are provided. The last section is devoted to a discussion. Among other things, it contains the formulae for the renormalization group invariants of the fully anisotropic SU(2) PCF with Wess-Zumino term.
2 Fully anisotropic SU(2) PCF Following the lecture notes [18], let us gain some intuition about the fully anisotropic SU(2) PCF by considering its classical mechanics counterpart. It is obtained via 'dimensional reduction', where one restricts to field configurations that depend only on the spacetime variable x^0, so that U = U(x^0). Then the Lagrangian density (1.3), (1.12) becomes the Euler-top Lagrangian (2.1), where the ω_a are defined through the relation (2.2) (both displayed below, after fig. 2) and the dot stands for differentiation w.r.t. the time x^0. Also, the basis for the Lie algebra has been normalized such that ⟨t_a, t_b⟩ = δ_{ab} and [t_a, t_b] = i ϵ_{abc} t_c, with ϵ_{abc} being the Levi-Civita symbol, and summation over the repeated index is being assumed. It turns out that the Lagrangian (2.1) describes the free motion of a rigid body where the translational degrees of freedom have been ignored. Recall that an arbitrary displacement of a rigid body is a composition of a translation and a rotation. For a freely moving top, when the net external force is zero, one can without loss of generality consider the case when the centre of mass is at rest. Introduce two right handed coordinate systems called the fixed (laboratory) frame and the moving frame, which are defined by the ordered sets of unit vectors (E_1, E_2, E_3) and (e_1, e_2, e_3), respectively. The axes of the moving frame coincide with the principal axes of the rigid body w.r.t. the centre of mass. Then the orientation of the body is uniquely specified by a 3×3 special orthogonal matrix which relates the fixed and moving frames as in fig. 2. Thus the configuration space of a rigid body with a fixed point coincides with the group manifold of SO(3). The matrix specifying the rotation can be identified with an SU(2) matrix U taken in the adjoint representation. Mathematically this is expressed as (2.4), where again summation over a = 1, 2, 3 is being assumed. Figure 2: The orientation of the rigid body is uniquely specified by the 3D special orthogonal matrix that relates the moving frame (e_1, e_2, e_3) to the fixed frame (E_1, E_2, E_3). The axes of the moving frame are chosen to coincide with the principal axes of inertia.
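The displayed equations (2.1)-(2.2) referenced above were lost in extraction. Since the text identifies them with the free rigid body, a plausible reconstruction is the standard Euler-top form; the factor of i in (2.2) is a convention assumed here.

```latex
% Plausible reconstruction of eqs. (2.1)-(2.2), the Euler-top
% Lagrangian; the factor of i in (2.2) is convention dependent.
\begin{align}
L &= \tfrac{1}{2}\sum_{a=1}^{3} I_a\,\omega_a^{2} \tag{2.1} \\[4pt]
U^{-1}\dot{U} &= \mathrm{i}\sum_{a=1}^{3}\omega_a\, t_a \tag{2.2}
\end{align}
```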
The coefficients ω_a defined in (2.2) coincide with the projections of the instantaneous angular velocity ω along the principal axes. This can be seen by differentiating both sides of (2.4) w.r.t. time and comparing the result with ė_a = ω × e_a. The classical mechanics system governed by the Lagrangian (2.1) is called the Euler top. The parameters I_a, which were introduced originally as formal couplings in (1.12), coincide with the principal moments of inertia. Notice that the Lagrangian is built from U^{-1}U̇, which belongs to the Lie algebra and hence is insensitive to the difference between the groups SU(2) and SO(3).³ The Euler top is a textbook example of a Liouville integrable system. The IM that satisfy the conditions of Liouville's theorem are the Hamiltonian H and two more which are built from the angular momentum M (2.5). For a freely moving body the angular momentum is conserved, i.e., Ṁ = 0. On the other hand, the total time derivative Ṁ can be written in terms of the canonical Poisson bracket as {H, M}. Hence, the classical observable M Poisson commutes with the Hamiltonian. This way, the three functionally independent involutive Integrals of Motion may be taken to be H, M^2 and M_Z (2.6). It follows from Liouville's theorem that the equations of motion for the Euler top can be integrated in quadratures. The solution is discussed in any standard textbook on classical mechanics, see, e.g., [19]. The rigid body with two of the principal moments of inertia equal, I_1 = I_2 ≡ I, is usually referred to as the symmetric top. In this case the Lagrangian (2.1) possesses invariance w.r.t. rotations about the axis e_3. For the symmetric top it is convenient to choose the three functionally independent, involutive IM to be M^2, M_Z and M_3 ≡ M • e_3. Notice that the Hamiltonian is given in terms of these as (2.7). The case I_1 = I_2 = I_3 ≡ I is known as the spherical top and the Hamiltonian is proportional to M^2. The field theory generalization of the symmetric top is the anisotropic SU(2) PCF (1.3), (1.7), while that of the spherical top is the SU(2) PCF (1.3), (1.5). Remarkably, the fully anisotropic SU(2) PCF is also an integrable field theory according to the technical definition given in the introduction. Namely, the equations of motion for the model admit the Zero Curvature Representation (1.10). To demonstrate the integrability, it is useful to introduce the currents J^a_i via the formula (2.8). Then the Euler-Lagrange equations for the model (1.3), (1.12) can be written as (2.9), where (a, b, c) is a cyclic permutation of (1, 2, 3), while the remaining notation is defined in (2.10). Note also the kinematic relations (Bianchi identities) (2.11), which follow directly from the definition (2.8). The worldsheet connection for the fully anisotropic SU(2) PCF is rather complicated. For this reason we give it first for the case I_1 = I_2 = I_3 = (1/2) e^{-2}, which corresponds to the SU(2) PCF. Then the equations of motion (2.9) simplify greatly, since the term in the r.h.s. vanishes. The worldsheet connection A_± reads as (2.12), and one can easily check that the zero curvature condition (2.13) holds as a consequence of eqs. (2.9) and (2.11). This ZCR was first proposed in the work [20] and is valid for the sigma model associated with any simple Lie group G, with i J^a_± t_a replaced by −U^{-1} ∂_± U.
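The PCF connection (2.12) and the flatness condition (2.13) were lost in extraction. A plausible reconstruction is the standard Zakharov-Mikhailov form, consistent with the remark above that i J^a_± t_a is replaced by −U^{-1} ∂_± U for general groups; the normalization is an assumption.

```latex
% Plausible reconstruction of eqs. (2.12)-(2.13): the standard
% Zakharov-Mikhailov connection, up to sign/normalization conventions.
\begin{align}
A_{\pm} &= \frac{\mathrm{i}\,J^{a}_{\pm}\,t_a}{1 \mp \lambda} \tag{2.12} \\[4pt]
\partial_{+}A_{-} - \partial_{-}A_{+} + \bigl[A_{+},A_{-}\bigr] &= 0 \tag{2.13}
\end{align}
```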
The ZCR for the general case with I_1 ≠ I_2 ≠ I_3 was found in [21] and presented in a slightly different form in ref. [22]. In the following, the conventions of the latter paper will be used. To write the result, we swap the two independent combinations of (I_1, I_2, I_3) that enter into the equations of motion for m and ν according to (2.14), where cn(ν, m) is the Jacobi elliptic function with the parameter m. Together with sn and dn, it satisfies the relations sn^2(ν, m) + cn^2(ν, m) = 1 and dn^2(ν, m) + m sn^2(ν, m) = 1. The flat worldsheet connection reads explicitly as (2.16), where the coefficient functions are given in (2.17). In order to explore the one-loop renormalizability of the fully anisotropic SU(2) PCF, we turn to the analysis of the Ricci flow equation (1.4). It requires one to calculate the Ricci tensor R_{μν} corresponding to the target space metric G_{μν} given in (1.12). The computation is straightforward and we do not present it here. Instead, we mention the identity (2.18), where (a, b, c) is a cyclic permutation of (1, 2, 3). Then it follows that the Ricci flow equation is satisfied if the couplings I_a are assigned a τ dependence such that (2.19) holds (see also refs. [23,24]). This constitutes a set of coupled nonlinear ordinary differential equations describing the flow. Notice that for I_1 = I_2 = 1/(2e^2) and I_3 = (1+r)/(2e^2) one recovers the Ricci flow equations for the anisotropic SU(2) PCF (1.9). The latter reduce to the ones for the SU(2) PCF (1.6) with C_2 = 2 upon setting r = 0. We found that the system (2.19) possesses two Liouvillian first integrals.⁴ They are given by (2.20), where K(m) and E(m) stand for the complete elliptic integrals of the first and second kind, the parameter m is the same as in (2.14), while p coincides with cn^2(ν, m) from that formula, i.e., p = cn^2(ν, m) (2.22). The expression (2.20) for the first integrals is one of the original results of this paper.⁵ After it was obtained, we discovered that the system of differential equations (2.19) had been introduced, in a slightly different form, in the work of Darboux [26]. Its solution was discussed in refs. [27,28]. The flow of the couplings I_a as a function of τ can be analyzed numerically. The typical behaviour, for generic initial conditions such that all I_a at τ = 0 are positive and different, is presented in fig. 3. One observes from the figure that the solution of (2.19), i.e., the Ricci flow equation, remains real and nonsingular only within the finite interval τ ∈ (τ_min, τ_max). At the end points one of the couplings goes to zero, so that the curvature of the target space blows up. As a result, the one-loop approximation is no longer valid and the perturbative analysis is not sufficient to explore whether or not the model can be defined as a consistent (UV complete) QFT.
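The flow and its first integrals are written in terms of Jacobi elliptic functions and complete elliptic integrals. As a quick numerical sanity check of the identities just quoted, here is a minimal Python sketch; the sample values of ν and m are arbitrary test points, not parameters from the paper.

```python
# Numerically verify the Jacobi elliptic identities quoted above,
#   sn^2 + cn^2 = 1  and  dn^2 + m*sn^2 = 1,
# and evaluate the complete elliptic integrals K(m), E(m).
import numpy as np
from scipy.special import ellipj, ellipk, ellipe

nu, m = 0.7, 0.3                      # arbitrary test point, 0 < m < 1
sn, cn, dn, _ = ellipj(nu, m)         # SciPy uses the parameter-m convention

assert np.isclose(sn**2 + cn**2, 1.0)
assert np.isclose(dn**2 + m * sn**2, 1.0)

print(f"K(m) = {ellipk(m):.6f}, E(m) = {ellipe(m):.6f}")
```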
There exists another three parameter family of deformations of the three dimensional round sphere (1.5). It is the one mentioned in the introduction that was proposed by Fateev in ref. [11]. His metric, depending on (e^2, r, l), can be written as (2.23). Here the operator O, acting on the Lie algebra, is given by (2.24), where Ad_U stands for the adjoint action of the group (2.25). The Ricci flow equation (1.4) leads to the system of ordinary differential equations (2.26) for the three parameters. Notice that for l = 0 the metric (2.23) becomes the one for the anisotropic SU(2) PCF (1.7), while the above system of differential equations reduces to (1.9). A remarkable feature of (2.26) is that it possesses solutions where e^2(τ), r(τ), l(τ) are real and nonsingular on the half infinite line (−∞, τ_max) with some real τ_max. In particular, this always happens when the couplings r and l are restricted as −1 < r(τ), l(τ) < 0. Such solutions of the Ricci flow equation, which can be continued to infinite negative τ, are called 'ancient'. That (2.26) admits ancient solutions suggests that the corresponding NLSM is a consistent QFT. The factorized scattering theory for the model was proposed in ref. [11]. The NLSM with metric (2.23) is an integrable classical field theory. The ZCR for the Euler-Lagrange equations was originally obtained in the work [12]. This way, the Fateev model provides an additional example of the link between integrability and one-loop renormalizability in sigma models.
3 Poisson-Lie deformation The models discussed above illustrate the connection between integrable NLSM and solutions of the Ricci flow equation. This can be used as a guiding principle for constructing new multiparametric families of metrics that satisfy the Ricci flow. Here we will discuss the so-called Poisson-Lie deformation of integrable NLSM. Such a deformation preserves integrability and allows one to obtain new solutions of the Ricci flow equation. We first illustrate the idea by showing that the anisotropic SU(2) PCF can be obtained as the Poisson-Lie deformation of the SU(2) PCF [17]. Then, a new integrable model is constructed by deforming the fully anisotropic SU(2) PCF. Poisson-Lie deformation of PCF To explain the Poisson-Lie deformation, we start from the Hamiltonian formulation of the model. The latter, in the case of the SU(2) PCF, can be described using the currents J^a_i (2.8). It follows from the Lagrangian (1.5), (1.3) that they form a closed Poisson algebra [16], eq. (3.1), whose last relation reads {J^a_1(x), J^b_1(y)} = 0. These are understood to be equal-time relations with x^0 = y^0, while x ≡ x^1 and y ≡ y^1 are the space coordinates (the dependence of the currents on the time variable has been suppressed). The Hamiltonian is obtained by means of the Legendre transform and is given by (3.2). One can check that the Hamiltonian equations of motion, Ȯ = {H, O}, for the currents are equivalent to eqs. (2.9) and (2.11) with I_1 = I_2 = I_3, i.e., eq. (3.3). The Poisson algebra (3.1) admits a certain deformation which preserves its defining properties, namely, skew-symmetry, the Jacobi and Leibniz identities. The deformed Poisson bracket relations read explicitly as (3.4). Here r plays the role of the deformation parameter and we switch the notation from J^a_i to J̃^a_i, as the above Poisson brackets will be associated with a different classical field theory. Remarkably, with the same form of the Hamiltonian as (3.2), see eq. (3.5), the equations of motion do not depend on the deformation parameter. Namely, they coincide with (3.3) upon replacing J^a_± by J̃^a_± = (1/2)(J̃^a_0 ± J̃^a_1). This means that the Hamiltonian system defined through (3.4) and (3.5) is integrable by construction. The corresponding flat connection entering into the ZCR takes the same form as for the SU(2) PCF (2.12), but written in terms of the currents J̃^a_±, eq. (3.6). The obtained classical field theory is called the Poisson-Lie deformation of the SU(2) PCF. The final and technically most involved step of the procedure is to derive the Lagrangian of the deformed model. It is well known in classical mechanics how to get from the Hamiltonian to the Lagrangian picture. Consider a mechanical system with a finite number of degrees of freedom d. The Poisson brackets are defined on the algebra of functions on the 2d-dimensional phase space. In local coordinates (z^1, . .
., z^{2d}) they are given by (3.7). Since the Poisson brackets are assumed to be non-degenerate, the inverse of the contravariant tensor Ω^{AB} exists and we will denote it as Ω_{AB}. Due to the skew-symmetry of the Poisson brackets, the covariant tensor Ω_{AB} is antisymmetric, i.e., it defines a two-form Ω = Ω_{AB} dz^A ∧ dz^B. Moreover, the Jacobi identity implies that the form is closed, dΩ = 0. This allows one to write Ω as an exact form, Ω = dα, at least locally. The action is expressed in terms of the one-form α and the Hamiltonian as (3.8), with the integral being taken over a path in the phase space parameterized by the time t. According to the Darboux theorem there exists (locally) a set of canonical variables (q_1, . . . , q_d, p_1, . . . , p_d) such that α = Σ_{m=1}^{d} p_m dq_m. Then the Lagrangian associated with the action (3.8) is given by (3.9). This can be interpreted as the Legendre transform of H, where the canonical momenta p_m are replaced by the velocities q̇_m as the independent variables. In order to apply the above procedure to the infinite dimensional Hamiltonian structure (3.4), (3.5), it is useful to realize the Poisson algebra in terms of fields, similar to the canonical variables p_m and q_m in the finite dimensional case. For this reason we introduce local coordinates X^μ on the group manifold and the corresponding canonical momentum densities Π_μ. They obey the Poisson bracket relations (3.10). In the case r = 0, when (3.4) becomes the undeformed algebra (3.1), the currents can be expressed in terms of the canonical fields in the following way: first, define the 3 × 3 matrix E^a_μ through the relation (3.11) for dU. Its inverse will be denoted by E^{μa}, so that E^a_μ E^{μb} = δ^{ab}. Then, with the choice (3.12), one can check via a direct computation that the Poisson algebra (3.1), with J^a_i replaced by the components of K_i, is satisfied. In fact, the r.h.s. of the first equation in (3.13) is just −i ∂_0 U U^{-1} written in terms of the canonical fields for the PCF.⁷ For general r ≠ 0 one should first apply the linear transformation (3.14). This brings the closed Poisson algebra (3.4) to the form (3.15), with the notation defined in (3.16), which is a direct sum of two independent so-called SU(2) current algebras. It turns out that the Poisson algebra generated by I^a and J^a can be formally realised in terms of the currents K_i (3.13), as well as the group valued field U ∈ SU(2). The explicit formula, along with its verification, is contained in ref. [29] and is given by (3.17). Here Ad_U stands for the adjoint action of the group, see eq. (2.25), while the linear operator R : su(2) → su(2) is defined via its action on the generators as (3.18). Formulae (3.14), (3.17) and (3.13) allow one to realize the currents J̃^a_0 and J̃^a_1, satisfying the Poisson bracket relations (3.4), through the canonical fields (3.10). The corresponding expression for the Hamiltonian follows from (3.5). In the basis of canonical variables the construction of the Lagrangian is straightforward and is the field theory analogue of the Legendre transform (3.9). Applying the procedure, where Π_μ maps to Ẋ^μ = {H, X^μ}, one arrives at the Lagrangian density (3.19). Here the dependence on e^2 was restored and we performed the substitution e^2 → (1 + r) e^2 to keep with the conventions of section 1. At first glance, in local coordinates, L cannot be written in the form (1.3). Instead, the latter should be modified as (3.20). Here the last term is not invariant w.r.t.
the parity transformation x^1 → −x^1, i.e., ∂_± → ∂_∓, and comes about because the Lagrangian density (3.19) is not parity invariant either. Models of this type motivate a generalization of the NLSM where the target space is additionally equipped with a two-form B = B_{μν} dX^μ ∧ dX^ν known as the B-field [30]. It turns out that in the SU(2) case the B-field corresponding to L (3.19) is a closed form (in fact, exact). As a result, the term ∝ B_{μν} in (3.20) is a total derivative and has no effect on the Euler-Lagrange equations. This way, for the SU(2) case, the obtained sigma model is equivalently described by (1.3) with the metric (3.21), where P_3 stands for the projector on the Cartan subalgebra generated by t_3. This way we arrive at the metric of the anisotropic SU(2) PCF (1.7). It was discussed in section 1 that the anisotropic SU(2) PCF is an integrable classical field theory. Having established that the model is a Poisson-Lie deformation of the SU(2) PCF, we obtain a way to derive the Zero Curvature Representation for the classical equations of motion. Namely, the flat connection is given by (3.6), where the currents J̃^a_± = (1/2)(J̃^a_0 ± J̃^a_1) entering therein read as (3.22). Indeed, the flatness follows from the Euler-Lagrange equations for the model (3.19), eq. (3.23). The following comment is in order here. The anisotropic SU(2) PCF admits an integrable generalization, where U belongs to an arbitrary simple Lie group G. The Lagrangian is still given by (3.19), with R being a certain linear operator which is usually referred to as the Yang-Baxter operator. It acts on the Lie algebra g = Lie(G) and is required to satisfy a skew-symmetry condition and the so-called modified Yang-Baxter equation [31]. A possible choice obeying the two properties is specified using the Cartan-Weyl decomposition of the simple Lie algebra, g = n_+ ⊕ h ⊕ n_−, where h stands for the Cartan subalgebra and n_± are the nilpotent ones. Namely, the linear operator R is unambiguously defined through the conditions (3.24). The NLSM (3.19) with R being the Yang-Baxter operator was introduced by Klimčík in ref. [32], who called it the Yang-Baxter sigma model. Written in terms of local coordinates, the Lagrangian takes the form (3.20) where, for a general group, the second term ∝ B_{μν} is no longer a total derivative and cannot be ignored. The model is classically integrable and the corresponding flat connection is given by the same formulae (3.6) and (3.22) [17].
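The modified Yang-Baxter equation invoked above can be checked concretely. For su(2), realized as R^3 with the cross product as Lie bracket, one admissible choice of R rotates the (t_1, t_2) plane by 90 degrees and annihilates the Cartan direction; the sketch below verifies the non-split equation numerically, in the convention [Rx, Ry] − R([Rx, y] + [x, Ry]) = [x, y]. Both the realization and the sign convention are assumptions for illustration, not necessarily those of the paper.

```python
# Minimal numerical check of the modified Yang-Baxter equation for su(2),
# realized as R^3 with the cross product as Lie bracket.
# R rotates the (t1, t2) plane by 90 degrees and kills the Cartan direction t3.
# The sign convention is an assumption; the paper's may differ by a factor.
import numpy as np

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])   # R t1 = t2, R t2 = -t1, R t3 = 0

def bracket(x, y):
    """Lie bracket of su(2) in the adjoint (vector) realization."""
    return np.cross(x, y)

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lhs = bracket(R @ x, R @ y) - R @ (bracket(R @ x, y) + bracket(x, R @ y))
    assert np.allclose(lhs, bracket(x, y))   # non-split mCYBE holds
print("modified Yang-Baxter equation verified")
```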
The Yang-Baxter sigma model also turns out to be a one-loop renormalizable field theory. The proof is based on the extension of the results of the works [6,7] to the case of an NLSM equipped with a B-field, which was carried out in ref. [30], see also the textbook [33]. The one-loop RG flow equations are modified from (1.4) as (3.25). Here H_{μνλ} are the components of the so-called torsion tensor. It is given by the exterior derivative of the B-field, i.e., H = dB (3.26). For the model (3.19), (3.20) with U belonging to a simple Lie group, the above equations boil down to a system of ordinary differential equations on e^2 and r. They read as (3.27) [34], where, remarkably, the only dependence on the group appears through an overall factor proportional to the value of the quadratic Casimir in the adjoint representation. Note that in the domain −1 < r < 0, for which the system (3.27) possesses ancient solutions, the deformation parameter √r entering into the Lagrangian of the Yang-Baxter sigma model (3.19) is an imaginary number. The corresponding target-space metric (3.21) remains real. However, the torsion tensor (3.26), which is non-vanishing outside the SU(2) case, becomes purely imaginary (a related discussion is contained in Appendix A of ref. [35]). We have just discussed that the Poisson-Lie deformation of the PCF yields the Yang-Baxter sigma model. The latter itself can be deformed along similar lines of argument [17], see also [35] as well as fig. 4 for a summary. In the case of G = SU(2) the obtained theory turns out to be the Fateev model, i.e., the sigma model with target space metric (2.23). For a general simple Lie group G, the two parameter deformation of the PCF was introduced by Klimčík in ref. [17]. The corresponding Lagrangian involves the Yang-Baxter operator R and is given by (3.28). For U ∈ SU(2) the B-field turns out to define a closed two-form and has no effect on the equations of motion. It was shown in [36] by an explicit computation that the metric is equivalent to (2.23). For an arbitrary simple Lie group G the model (3.28) is classically integrable and the ZCR was found in ref. [37]. One-loop renormalizability was demonstrated in the work [38] using the results of [39]. The differential equations describing the flow of the couplings (e^2, r, l) are given in (3.29). They essentially coincide with (2.26), which were derived in Fateev's original paper [11]. Poisson-Lie deformation of fully anisotropic SU(2) PCF Here we obtain a new classically integrable NLSM as a Poisson-Lie deformation of the fully anisotropic SU(2) PCF. The procedure closely follows that which was explained above on the example of the SU(2) PCF. The Hamiltonian for the fully anisotropic SU(2) PCF (1.3), (1.12), written in terms of the currents (2.8), is given by (3.31), while the equal-time Poisson bracket relations for J^a_i read as (3.32). The above Poisson algebra admits a deformation of the form (3.33), depending on the extra parameter ξ. Then, with the Hamiltonian (3.34), which is formally the same as (3.31) but expressed in terms of the new currents J̃^a_i, the Hamiltonian equations of motion imply (3.35). Here (a, b, c) = perm(1, 2, 3) and summation over repeated indices is not being assumed. The equations (3.35) are equivalent to (2.9), (2.11) up to the replacement J^a_i → J̃^a_i.
The currents J̃^a_i obeying the Poisson bracket relations (3.33) can be realized in terms of the fields X^μ and Π_μ subject to the canonical commutation relations (3.10). This is done along the same line of arguments as was discussed in the previous subsection. Namely, one first considers certain linear combinations of J̃^a_i which obey two independent copies of the classical SU(2) current algebra (3.15), with k being a certain function of the couplings I_a and the deformation parameter ξ. Then, realizing I and J in terms of the canonical variables (see formulae (3.17), (3.13)) and performing the Legendre transform of the Hamiltonian (3.34), one obtains the Lagrangian of the deformed theory. The result of the calculations reads as (3.36), where a certain choice of the overall multiplicative factor for the Lagrangian density was made. Here and below we use the notation O_± for the linear operators acting on the Lie algebra su(2) given by (3.37). The Lagrangian density (3.36) is formally not invariant under the parity transformation x^1 → −x^1 (so that ∂_± → ∂_∓). Nevertheless, the theory possesses this symmetry. The reason is that in local coordinates, where L (3.36) takes the form (3.20), the term ∝ B_{μν} turns out to be a total derivative. Thus one is free to replace O_+ in (3.36) according to (3.38), where the transposition is defined by the condition ⟨x, O_+ y⟩ = ⟨O_+^T x, y⟩ for any x, y ∈ su(2). This way, the target space metric for the deformed sigma model can be written as (3.39). It is worth mentioning that for I_1 = I_2 this becomes the Fateev metric (2.23), (2.24) upon the identification of parameters (3.40). By construction, the obtained model (3.36) is a classically integrable field theory. The corresponding flat connection takes the same form as for the fully anisotropic SU(2) PCF, i.e., (3.41), where the functions w_a(λ) are given in (2.14) and (2.17). The formula for the currents J̃^a_± in terms of the SU(2) element U reads as (3.42). The couplings obey the RG flow equations (3.44)-(3.49). The two first integrals of the system (3.46) read as (3.50), where Π(m̃, m) is the complete elliptic integral of the third kind (3.51).
Summary and discussion In this work we explored the interplay between integrability and one-loop renormalizability for NLSM in 1 + 1 dimensional spacetime. Our main example was the fully anisotropic SU(2) PCF. On the one hand, it was explained that this is a classically integrable field theory and the Zero Curvature Representation for its equations of motion was reviewed. On the other, the corresponding target space metric satisfies the Ricci flow equation (1.4), so that the fully anisotropic SU(2) PCF is one-loop renormalizable within a three dimensional space of couplings. The system of ODEs describing the flow was derived and its full set of first integrals was obtained, independently from [27,28]. Another main result is the construction of a classically integrable NLSM depending on four parameters, whose Lagrangian density is given by (3.36)-(3.38). It was found by applying a Poisson-Lie deformation to the fully anisotropic SU(2) PCF. The corresponding target space metric turned out to provide a new solution to the Ricci flow equation. The first integrals to the system of ODEs (3.44) and (3.46), which describe the flow of the four couplings, were derived in the course of this work and are given in (3.50).
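The renormalization group invariants combine K(m), E(m) and the complete elliptic integral of the third kind Π(m̃, m). A minimal Python sketch of evaluating these special functions with mpmath follows; the numerical arguments are arbitrary placeholders, not parameters from the paper.

```python
# Evaluate the complete elliptic integrals entering the RG invariants:
# K(m), E(m) and Pi(n, m). The argument values are arbitrary placeholders.
from mpmath import mp, ellipk, ellipe, ellippi

mp.dps = 30                    # work with 30 significant digits
m, n = mp.mpf("0.3"), mp.mpf("0.1")

print("K(m)     =", ellipk(m))
print("E(m)     =", ellipe(m))
print("Pi(n, m) =", ellippi(n, m))   # two-argument form: complete integral
```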
The class of theories that we discussed admits a modification such that they remain one-loop renormalizable. This is achieved by adding the so-called Wess-Zumino term to the action. The Lagrangian takes the form (3.20), with the B-field no longer being exact. This implies that the target space, together with the Riemannian metric G_{μν}, is equipped with an affine connection whose torsion H = dB is nonvanishing [30]. In the case of SU(2), the 3-form H is proportional to the volume form of the group and can be written as (4.1). Here k is an additional parameter of the model. In the classical theory there is no constraint on the values it may take; however, upon quantization it is required to be an RG invariant and, furthermore, must be an integer [40]. For the case of the fully anisotropic SU(2) PCF with Wess-Zumino term, the one-loop RG flow equations (3.25) imply the system of ODEs (4.2) for the couplings. It possesses two Liouvillian first integrals, which are a simple generalization of (2.20) and, in terms of p and m (2.22), take the form (4.3). A complete analysis of the behaviour of the solutions to (4.2) has not been carried out yet. Moreover, the classical integrability of the model has not been established and the Zero Curvature Representation, if it exists, remains unknown to us. These would be interesting questions to pursue in future work. They can also be addressed for the Poisson-Lie deformed theory. Our work was mainly focused on sigma models associated with the Lie group SU(2). Nevertheless, we expect it to be possible to generalize the Poisson-Lie deformed theory constructed here to the case of higher rank Lie groups. One way to approach the problem uses the results of ref. [41]. In that paper, a classically integrable NLSM is introduced, which is a two parameter deformation of the PCF for the Lie group SL(N). For N = 2 it coincides with the fully anisotropic SU(2) PCF (upon an appropriate choice of reality conditions on the fields and parameters). We expect that this sigma model may also be deformed along the line of arguments presented in sec. 3. Another possibility for constructing integrable deformations, based on the formalism of the so-called affine Gaudin model, is mentioned in the perspectives section of ref. [41]. Finally, classically integrable multiparametric families of sigma models are of interest to string theory. In particular, the possibility of an integrable elliptic deformation of strings on AdS_3 × S^3 × T^4 was investigated in the recent paper [42]. Footnote 6: In our discussion of the Poisson-Lie deformation we set e^2 = 1. Since it appears in an overall factor multiplying the Lagrangian, this has no effect on the classical equations of motion. Figure 4: The relation between the various models. The Poisson-Lie deformation is represented by an arrow.
Thymosin Alpha1-Fc Modulates the Immune System and Down-regulates the Progression of Melanoma and Breast Cancer with a Prolonged Half-life Thymosin alpha 1 (Tα1) is a biological response modifier that has been introduced into markets for treating several diseases. Given the short serum half-life of Tα1 and the rapid development of Fc fusion proteins, we used genetic engineering methods to construct a recombinant plasmid expressing the Tα1-Fc (Fc domain of human IgG4) fusion protein. A single-factor experiment was performed with different inducers at varying concentrations for different times to determine the optimal induction conditions. Protein of purity higher than 90.3% was obtained by induction with 5 mM lactose for 4 h, with a final production of about 160.4 mg/L. The in vivo serum half-life of Tα1-Fc is 25 h, almost 13 times longer than that of Tα1 in mouse models. Also, the long-acting protein has a stronger activity in repairing immune injury by increasing the number of lymphocytes. Tα1-Fc displayed a more effective antitumor activity in the 4T1 and B16F10 tumor xenograft models by upregulating CD86 expression, stimulating the secretion of IFN-γ and IL-2, and increasing the number of tumor-infiltrating CD4+ T and CD8+ T cells. Our study on the novel modification of Tα1 with the Fc segment provides valuable information for the development of new immunotherapies in cancer. Since the discovery of thymosin alpha 1 (Tα1) in the 1970s, several studies have investigated Tα1. Tα1 (brand name: ZADAXIN, INN: thymalfasin) is a small polypeptide of 28 amino acids at about 3.1 kDa 1. Tα1 acts through Toll-like receptors (TLR2 and TLR9) in myeloid and plasmacytoid DCs (dendritic cells) 2, leading to the activation and differentiation of DCs and T cells, as well as the initiation of cytokines, such as interferon-gamma (IFN-γ) and interleukin-2 (IL-2) 3. Also, Tα1 can antagonize the dexamethasone (DEX)-induced apoptosis of CD4+CD8+ thymocytes 4 and the hydrocortisone (HC)-induced decrease in the thymus index and spleen index 5. Moreover, Tα1 has been evaluated for its activities in hepatitis B and C 6-8, cystic fibrosis 9, cancer 10,11, immune deficiency 12, and HIV/AIDS 13. The serum half-life of Tα1 is no more than 2 h, which, together with poor tumor penetration, limits its clinical use. Combinations of Tα1 and peginterferon α-2a, as well as of Tα1 and DEX, have made some achievements 14,15. Among the strategies for extending serum half-life in the body, adding an immunoglobulin G (IgG) Fc fragment is one of the most effective technologies. The Fc fragment provides therapeutic improvement by interacting with FcRn, which delays the lysosomal degradation of immunoglobulins by cycling them back into circulation, resulting in a prolonged half-life as described above [16][17][18]. In terms of production, recombinant expression of Fc-fusion proteins offers a relatively high yield 16. Moreover, the Fc region can be leveraged for its high reversible affinity to staphylococcal protein A or streptococcal protein G 19. A His6-tag was introduced into the fusion protein for purification by nickel ion affinity chromatography. So far, 11 Fc-fusion proteins have been approved by the FDA 20 and more than 300 have been studied. In this study, Tα1-Fc was designed by fusing the C-terminus of Tα1 to the hinge of the IgG4 Fc to extend the half-life. The optimal induction conditions for the recombinant protein were investigated, and the protein was then purified for subsequent studies of its in vivo activities.
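The fusion design starts from the 28-residue Tα1 peptide of about 3.1 kDa. As a quick sanity check of such mass estimates, here is a minimal Python sketch using Biopython; the sequence used is the canonical human Tα1 sequence (with the N-terminal acetylation ignored), which is an assumption insofar as the paper does not print it.

```python
# Sanity-check the ~3.1 kDa mass quoted for the 28-residue Tα1 peptide.
# The sequence below is the canonical human thymosin alpha 1 sequence
# (N-terminal acetylation ignored); it is assumed here, not taken from the paper.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

TA1 = "SDAAVDTSSEITTKDLKEKKEVVEEAEN"   # 28 aa
analysis = ProteinAnalysis(TA1)

print(len(TA1), "residues")
print(f"average MW = {analysis.molecular_weight():.1f} Da")  # ~3.1 kDa
```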
Rats were given an intravenous injection to determine the half-life. Moreover, the anti-tumor activity was evaluated in 4T1 and B16F10 xenograft tumor models by examining the effects of Tα1-Fc on tumor inhibition and cytokine expression.
Results The optimum expression condition of Tα1-Fc. Plasmid pET32a(+) with inserted Trx tag and His6 tag was used as a suitable expression vector for soluble fusion protein expression 21. The optimization, a single-factor experiment, was performed using IPTG or lactose for different induction times and evaluated by SDS-PAGE followed by ImageJ analysis. The fusion protein was expressed in the supernatant using 1 mM IPTG and 5 mM lactose, with protein contents of about 30.5% and 33.3%, respectively, which suggests soluble expression (Fig. 1A) (Fig. 1A and B gels were cropped from different parts of the same gel; full-length gels corresponding to Fig. 1A and B are shown in Supplementary Fig. S1). The apparent molecular weight ranged from 42.7 kDa to 66.2 kDa, consistent with the theoretical value. The experiment in Figure 1B (see Supplementary Fig. S2) was performed to exclude interference from induced expression of the empty pET32a vector. A protein of about 17 kDa was mainly expressed in the supernatant of the negative control, and the vector itself had a negligible impact on the expression of Tα1-Fc. With increasing lactose concentration (i.e., 2.5 mM, 5 mM, 7.5 mM, and 10 mM), the protein contents were 21.6%, 22.3%, 18.6%, and 18.3%, respectively (Fig. 1C) (see Supplementary Fig. S1). With gradual extension of the induction time (i.e., 2 h, 4 h, 6 h, and 8 h), the protein content was about 23.2%, 37.8%, 30.5%, and 28.8% (Fig. 1D) (see Supplementary Fig. S3); hence, the subsequent induced expression was performed for 4 h. In summary, the soluble expression of recombinant Tα1-Fc in Escherichia coli reached the highest level of about 45.8% when incubated with 5 mM lactose for 4 h (Fig. 2B) (see Supplementary Fig. S5). Identification of Tα1-Fc. With increasing imidazole concentration, the competition of imidazole with the polyhistidine tag for binding to the Ni2+ column becomes stronger, and the amount of target protein eluted increases. In Fig. 2A (Fig. 2A gel cropped from different gels; for the full-length version of Fig. 2A see Supplementary Fig. S4 and Supplementary Fig. S5), clear bands can be seen with 100 mM and 200 mM imidazole, especially 200 mM. Therefore, the fractions eluted by 100 mM and 200 mM imidazole were collected and then desalted using Sephadex G-25. The lyophilized powder was a white floc, and protein production reached 160.4 mg/L. The contents of the induced protein, purified protein, and lyophilized powder were 45.8%, 90.3%, and 92.5%, respectively (Fig. 2B). The expressed product showed the expected reactivity with the anti-IgG4 antibody (Fig. 2C). Moreover, mass spectrometry analysis showed that the MW of the fusion protein Tα1-Fc is about 45,937 Da, which is very close to the theoretical MW value (Fig. 2D). Overall, the induced protein is the one we designed. Tα1-Fc shows a longer serum half-life. Half-life extension is dominated by strategies utilizing albumin binding or fusion, fusion of an immunoglobulin Fc region, and PEGylation. The results show that the serum half-life of Tα1-Fc was 24.58 h, almost 13 times longer than that of Tα1 (Fig. 3). The peak concentrations of Tα1 and Tα1-Fc occurred at 1.5 and 13 h, at 74.347 ng/L and 118.896 ng/L, respectively. The relative bioavailability of Tα1-Fc is about 90.70% (Table 1).
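A terminal half-life such as the 24.58 h quoted above is typically extracted by fitting an exponential to the terminal phase of the concentration-time curve, with t1/2 = ln 2 / k. A minimal sketch follows; the data points are entirely hypothetical placeholders, not the paper's measurements.

```python
# Estimate a terminal half-life by fitting C(t) = C0 * exp(-k * t)
# to late time points; t_half = ln(2) / k.
# The data below are hypothetical placeholders, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([13.0, 24.0, 36.0, 48.0, 72.0])        # h, terminal phase
c = np.array([118.9, 87.0, 62.0, 44.0, 22.0])       # ng/L, hypothetical

def mono_exp(t, c0, k):
    return c0 * np.exp(-k * t)

(c0, k), _ = curve_fit(mono_exp, t, c, p0=(120.0, 0.03))
print(f"k = {k:.4f} 1/h, t_half = {np.log(2) / k:.1f} h")   # ~25 h here
```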
Tα1-Fc restored the immune system of immunocompromised mice induced by hydrocortisone. Hydrocortisone is a glucocorticoid that can affect the level of endogenous GCs and interfere with the proliferation and differentiation of lymphocytes, resulting in immune injury. As the spleen and the thymus are the main immune organs of the body, an immunocompromised mouse model was built by injecting HC for a week, and the mice were then treated with the drugs to examine the immunological activity of the fusion protein. In Table 2, the spleen index and thymus index decreased sharply upon treatment with HC. The spleen index of the drug groups increased to 7.12-7.69 mg/g, which is close to that of the blank group, in contrast to the normal control group (PBS). A significant difference was found in the spleen index and the thymus index of Tα1-Fc (p < 0.01). The thymus index of Tα1-Fc was 0.87 ± 0.21 mg/g, whereas that of Tα1 was 0.60 ± 0.23 mg/g, compared with 0.59 ± 0.14 mg/g for PBS. Similarly, the spleen index also followed the order Tα1-Fc, Tα1, PBS from strong to weak; these two indexes suggest that Tα1-Fc has a stronger activity in repairing the impaired spleen and thymus than Tα1. H&E staining was used to detect whether the cells were damaged; normal thymocytes appear dark blue, and damaged ones appear light pink. The pink area accounts for a large part, which reveals that the number and the condition of thymocytes in immunocompromised mice treated with HC declined significantly (Fig. 4). The staining deepens from light pink to deep purple across the groups treated with PBS, Tα1, and Tα1-Fc, as well as the normal control group; this shows that Tα1-Fc has a stronger function in repairing the damaged thymus than Tα1. Tα1-Fc exhibited a better anti-tumor activity than Tα1 on 4T1 mouse mammary tumor xenografts. The 4T1 tumor model in BALB/c mice is an animal model for stage IV human breast cancer that closely mimics human breast cancer. The growth of 4T1 tumors was relatively slow in the first 6 days after the first drug injection. As time progressed, the differences between independent groups became larger. On day 13, mice were sacrificed by cervical dislocation to obtain the tumor entity (Fig. 5A) and peripheral blood. The average tumor volume of the PBS group reached 1,097.19 ± 327.51 mm^3, whereas that of Tα1, Tα1-Fc, and Tax was 687.61 ± 199.08, 602.84 ± 138.99, and 560.74 ± 112.49 mm^3, respectively (Fig. 5B). The tumor weight trend was consistent with the volume growth trend, in the order PBS > Tα1 > Tα1-Fc > Tax at 1.63 ± 0.48, 0.99 ± 0.40, 0.93 ± 0.29, and 0.77 ± 0.22 g, respectively (Fig. 5C). The inhibitory activity of Tα1 and Tα1-Fc was 37.33% and 45.06% on tumor volume and 39.31% and 42.96% on tumor weight (Fig. 5D, Table 3), respectively. In comparison with PBS, Tα1 and Tα1-Fc strongly inhibited tumor growth with no significant side effects (p = 0.0171, p = 0.0032). However, mice treated with Tax appeared thin and lost weight. The weight of mice treated with Tax was 17.94 ± 0.78 g, whereas that of the PBS group was 18.82 ± 1.49 g (Fig. 5E). In this respect, Tα1-Fc and Tax showed a significant difference compared with PBS. In 4T1 xenograft tumor models, the s.c. administration of Tα1-Fc showed a stronger inhibitory activity than Tα1 on tumor volume and tumor weight.
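The inhibition rates above follow from the usual formula, inhibition = (1 − T/C) × 100, with T the treated-group mean and C the PBS-control mean. A minimal sketch reproducing the quoted volume-based rates from the reported means:

```python
# Reproduce the tumor inhibition rates quoted above from the reported
# group means, using inhibition = (1 - treated / control) * 100.
control_volume = 1097.19                   # PBS mean tumor volume, mm^3

treated = {"Ta1": 687.61, "Ta1-Fc": 602.84, "Tax": 560.74}   # mm^3
for name, vol in treated.items():
    rate = (1.0 - vol / control_volume) * 100.0
    print(f"{name}: {rate:.2f}% volume inhibition")
# -> Ta1: 37.33%, Ta1-Fc: 45.06%, Tax: 48.89% (cf. Fig. 5D / Table 3)
```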
Cytokine secretions, e.g., IFN-γ and IL-2, can be regulated by Tα1. IFN-γ has the capability to modulate the immune response against a variety of antigens, whereas IL-2, as an anti-inflammatory cytokine, can boost host immunity against cancer 23,24. ELISA was performed to detect the concentrations of IFN-γ and IL-2 in peripheral blood. Cytokine concentrations were increased in both the Tα1 and Tα1-Fc groups, especially the IFN-γ concentration of the Tα1-Fc group (p = 0.00003). Also, the IL-2 concentration of the Tα1-Fc group differed significantly from that of the Tα1 group (p = 0.0389) (Fig. 5F). Overall, Tα1-Fc outperformed Tα1 in stimulating the secretion of the cytokines IFN-γ and IL-2. The H&E staining slices of 4T1 tumors are shown in Fig. 6A. The vast majority of 4T1 cells in the PBS group were in a good proliferative state with intact nuclear structure; partially pathologic mitotic figures are marked in the figure. CD4 is a co-receptor for Ag recognition and presentation, whereas CD8+ T cells can lyse tumor cells. In this study, CD4, CD8, and CD86 were detected by IHC. Results are shown in Fig. 6B. In 4T1 tumor models, mice treated with Tα1-Fc, compared with Tα1, showed increased expression of CD86 and greater infiltration of CD4+ and CD8+ T lymphocytes into tumor tissues. Tα1-Fc displayed a stronger tumor growth inhibition on melanoma compared with Tα1. Melanoma growth was explosive, and it took only nine days from the first administration to sacrifice. A solid tumor photo is shown in Fig. 7A. On day 9, the average tumor volume of the PBS group was about 1,200 mm^3, whereas those of Tα1, Tα1-Fc, and Tax were 881.71 ± 305.2, 761.02 ± 239.85, and 518.21 ± 280.74 mm^3 (Fig. 7B), respectively. Administration of Tα1-Fc and Tax significantly reduced the tumor volume, with p values of 0.009 and 0.008, respectively. Also, the tumor weight of Tα1-Fc (0.9420 ± 0.2152 g) was significantly lower than that of Tα1 (1.3810 ± 0.4859 g, p = 0.0494) (Fig. 7C). The tumor inhibitory rate was 27.25%, 37.21%, and 57.24% for Tα1, Tα1-Fc, and Tax, respectively (Fig. 7D, Table 4). The average weight of mice in the Tax group declined by about 0.33 g from the fifth day to the ninth day, whereas that of the other groups remained essentially stable (Fig. 7E). Other side effects, such as poor appetite, were observed, just as on the 4T1 models treated with Tax. Tα1-Fc had no effect on mouse weight. Thus, Tα1-Fc exerted a better anti-tumor activity than Tα1 on melanoma. On melanoma models, IFN-γ and IL-2 were upregulated by both Tα1 and Tα1-Fc, and the concentrations of the two cytokines in the Tα1-Fc group differed significantly from those of the PBS group (p = 0.0016, p = 0.0032) (Fig. 7F). The concentrations of IFN-γ and IL-2 stimulated by Tα1-Fc were several-fold higher than in the Tα1 group. These results suggest that Tα1-Fc may inhibit tumor progression through the secretion of the cytokines IFN-γ and IL-2. Necrosis in B16F10 tumor tissues is shown in Fig. 8A. Some B16F10 cells were in the mitotic phase in the PBS group. Local tumor tissue necrosis appeared with some inflammatory cell infiltration in the Tα1 group, whereas the necrotic area of the Tα1-Fc group was larger, with more shrinking cells. In terms of CD marker expression, increased CD4 and CD86 were detected in the melanoma models treated with Tα1 and Tα1-Fc (Fig. 8B). Tα1-Fc showed a stronger effect on CD4+ T lymphocyte infiltration than Tα1. More CD86 was observed in the necrotic areas of the Tα1-Fc group. The background color interfered with the detection of CD8 when DAB staining was used in melanoma; hence, AEC was chosen for staining CD8.
Tα1-Fc promoted the infiltration of CD8+ T lymphocytes into tumor tissues compared with the Tα1 group. In summary, Tα1-Fc inhibited tumor growth in melanoma with increased expression of IFN-γ, IL-2, and CD86 and increased numbers of tumor-infiltrating CD4+ T and CD8+ T cells.
Discussion Tumors often occur in immunosuppressed individuals with declined DC functions 25. Cellular immune response efficiency depends on Ag capture, processing, delivery to lymph nodes, and presentation to effector cells of the adaptive immune system. Tα1, as an immunomodulator, has dual effects: on DC functions in sensing infection and tissue stress through TLR stimulation 26, and on tumor cells by upregulating major histocompatibility complex class-I Ag expression in normal and transformed cells, resulting in increased Ag presentation. In addition, the production of cytokines intervenes in tumor progression and development. Tα1 was recently proved to bind human serum albumin (HSA) 27. HSA is an important protein in serum and serves as a carrier for many drugs and peptides. The binding of Tα1 to HSA might help to distribute Tα1 along the blood circulation. These results shed light on the pharmacokinetic properties of Tα1. To find out whether Tα1-Fc maintains these pharmacokinetic properties, we plan to study the pharmacokinetics of Tα1-Fc and its mechanism in future work. Tα1 was also recently proven to interact with hyaluronic acid (HA) through its C-terminal sequence LKEKK 28. Tα1 shares similar sequences with CD44 and RHAMM, both of which can bind HA. HA, a kind of glycosaminoglycan, plays an important role in a variety of diseases and in developmental and physiological processes. Tα1 was proven to inhibit the HA-CD44 or HA-RHAMM interactions and thereby suppress tumor progression. Based on these findings, further research on the interaction of Tα1-Fc with receptors or extracellular matrix components like HA needs to be conducted in the future for a better understanding of the immune and antitumor mechanisms. Tα1 is a natural circulating hormone peptide capable of influencing many components of the inflammatory/autoimmune cascade at a time. Considering the short half-life of Tα1, we constructed a fusion protein of Tα1 and an IgG Fc fragment. Most antibodies approved by the FDA are built on IgG1, since IgG1 shows a stronger affinity to FcγR than IgG4 and can induce ADCC to enhance antitumor activity. Among these antibodies, Portrazza, Perjeta and Erbitux are the most famous, used in the treatment of HNSCC, breast cancer, and CRC, respectively. However, Tα1 is not a targeting protein. To reduce the ADCC and CDC caused by the binding of Fc to FcγR, IgG4 was chosen. The fusion protein of glucagon-like peptide-1 (GLP-1) is one of many success stories of introducing an IgG4-Fc fragment 29. For now, some other IgG4-Fc fusion proteins have been put into development. On the other hand, the introduction of the Fc region allows binding with FcRn, which recycles IgG back into the circulation rather than directing it into a degradation pathway 16; the fusion also increases the MW beyond the glomerular filtration cutoff of about 69 kDa 18. Improved pharmacokinetic properties contribute to efforts toward the clinical use of Tα1 30. However, serum Tα1 levels vary considerably among different individuals and different diseases [31][32][33], and it is difficult to discriminate endogenous protein from exogenous protein by ELISA.
Given individual differences in mice, the circulating levels of endogenous Tα1 in relation to the exogenously administered Tα1-Fc will be determined by using radiolabeled proteins in the future. In addition, Tα1 restores NK activity and reconstructs cell immunity in immunosuppressed mice 12. In this study, we evaluated immune function in immunocompromised ICR mice. One week after HC withdrawal, the thymus and the spleen had not regained their normal size. The thymus is the major organ for producing T lymphocytes and numerous cytokines and thymic hormones. Our findings showed that immunosuppressed mice treated with Tα1 and Tα1-Fc improved to varying degrees, specifically in the thymus index and the spleen index. Tα1-Fc can also regulate the immune system by stimulating cytokine production, such as IFN-γ and IL-2. Certainly, the recombinant protein exhibited a better activity in reconstructing the immune system compared with the synthetic peptide Tα1. Tα1 has been proved effective in several cancers, such as lung cancer 34, colon cancer 35, melanoma 10, and breast cancer 36. In this study, we investigated the in vivo antitumor activity of Tα1-Fc on B16F10 and 4T1 tumor models. The tumor volume trend chart and tumor weight chart both demonstrated that Tα1-Fc can inhibit tumor growth more strongly than Tα1. The CD4 and CD8 co-receptors are predominantly expressed on the surface of T helper cells (Th) and cytotoxic lymphocytes (CTL), respectively. Immune response requires CD4 for Ag recognition in cooperation with CD8 for tumor elimination. However, CD8 T cells with low avidity for tumor Ag were inefficient in tumor invasion 37. Studies have proved that CD4+ T cells exert antitumor activities by activating and recruiting macrophages and eosinophils, which produce tumor-destroying free radicals, and induce the secondary expansion and accumulation of CD8+ T cells by co-expressing IL-21 and IFN-γ 38,39. In addition, Th1 cells in the early stages were changed to Treg and Th17 cells in the late stages of breast cancer development. In vivo findings detailed in this paper reinforce the validity of this recombinant protein as an immune-enhancing agent and an antitumor compound that stimulates the secretion of cytokines and upregulates CD86 to a modest degree. These findings strongly encourage the further exploitation of Tα1-Fc in clinical use for cancer therapy.
Conclusion In general, the recombinant protein we produced has a stronger activity in stimulating cytokine secretion and repairing a damaged immune system. Our findings also demonstrate a better tumor inhibitory effect on melanoma and 4T1 mouse mammary tumor xenografts. These findings reinforce the potential use of Tα1-Fc as a promising antitumor compound against different tumors.
Materials and Methods Materials. 4T1. Nickel ion affinity chromatography. Most Fc antibodies can be purified via standard protein A or G chromatography. However, the content and production of Tα1-Fc obtained by protein A or G are low. Owing to the His6-tag in the expressed product, we can purify Tα1-Fc by using Ni2+ affinity chromatography 21,44. The sample solution, about two column volumes, was loaded onto the Ni2+ column at a fixed flow rate to allow binding; afterwards, imidazole solutions of gradient concentrations, about ten column volumes in total, were applied successively to compete for the His6 binding. The elution was analyzed by SDS-PAGE.
To obtain lyophilized protein powder, the eluate was first desalted with 0.1 mol/L ammonium bicarbonate on Sephadex G-25 and then lyophilized.

Western blotting. Acrylamide concentrations of the stacking and separating gels for SDS-PAGE were 5% and 15%, respectively. A total of 20 µL of sample and 5 µL of 5× Loading Buffer were mixed and then denatured in boiling water for 5 min; 10 µL of the mixture was loaded into each gel well. The gels were run at 80 V and then at 120 V once the bands reached the separating gel, until they reached the bottom of the gel. All gels were imaged using an Image Master VDS. After SDS-PAGE, proteins were transferred from the gels to nitrocellulose membranes at 160 mA for 1.5 h at 4 °C; the membranes were then blocked in 5% skim milk for an hour. Primary anti-human-IgG4 antibodies and secondary antibodies were successively added for 2 and 1.5 h incubations, respectively. Detection was performed using an ECL system.

Mass spectrometry. Protein identification was performed with an Ultraflex TOF/TOF mass spectrometer (Bruker Daltonics Co., Ltd). The instrument was operated in linear positive mode. Protein eluted from the Ni2+ affinity column was confirmed as pure by SDS-PAGE. Sinapinic acid was prepared as a saturated solution in TA50 (0.1% TFA) and added to the sample droplet in a 1:1 ratio (v:v). The mass spectrometric data are technical triplicates from three sample injections.

In vivo determination of serum half-life. Wistar rats were treated with a single intravenous injection at a dose of 0.057128 µmol/kg drug/rat. Peripheral blood was subsequently collected with sodium citrate for anticoagulation after 10 min, 30 …

Immunocompromised mice models. Mice were treated with 50 mg/kg HC via subcutaneous injection every day for a week and, once the animals showed body malaise and weight loss, were randomly divided into three groups (PBS, Tα1, and Tα1-Fc 0.081532 µmol/kg), plus a blank control group without any treatment. After 7 days, mice were sacrificed by cervical dislocation to obtain the thymus and spleen, which were examined by hematoxylin and eosin (H&E) staining. Peripheral blood was allowed to stand for at least 30 min and centrifuged at 4,000 rpm for 10 min (the same below), and the serum was then assayed using a mouse IFN-γ ELISA kit and a mouse IL-2 ELISA kit.

Tumor modeling. 4T1 tumor cells and B16F10 melanoma cells were injected into syngeneic BALB/c and C57BL/6 mice, respectively, at a concentration of 1 × 10 5 /mL. These mice were randomly divided into four groups (PBS, Tα1, Tα1-Fc 0.081532 µmol/kg, and Tax 0.011711 µmol/kg) with seven mice per group when the tumor volume reached 80 mm 3 . The mice were treated with PBS, Tα1, or Tα1-Fc every day, or Tax every 2 days. Tumor volume and body weight were measured every day. When the PBS group's average tumor volume reached 1,000 mm 3 , the solid tumors were harvested and stored in 4% paraformaldehyde.

ELISA. ELISA is widely used for quantitating antibodies (Ab) or antigens (Ag) by utilizing an enzyme-linked antibody binding to a surface-attached Ag 45 . Blank wells, standard wells with five gradients, and sample wells were used; 50 µL of sample was added to each well. The plate was incubated at 37 °C (the same below) for 30 min and, after washing, incubated with 50 µL of HRP-labeled goat anti-mouse antibody in all wells except the blank. TMB-A and TMB-B were added for color development, followed by stop solution after 10 min.
The absorbance of each well was read on a spectrophotometer using 450 nm as the primary wavelength.

Histochemistry and immunohistochemistry. Cell pathological changes, such as tissue necrosis, were detected using histochemical (H&E) staining 46 . Immunohistochemical (IHC) staining of CD molecules was used to evaluate tumor-infiltrating T cells in the tissues. Briefly, the fixed tissue was subjected to dehydration, clearing, wax infiltration, embedding, sectioning, mounting, and baking to obtain tumor tissue sections of about 4 µm thickness. Sections were incubated with H 2 O 2 (3%) for 20 min, then successively with the primary antibodies (rat anti-mouse CD4, CD8, or CD86 monoclonal antibody) for 12 h and the secondary antibody (HRP-tagged rabbit anti-rat IgG) for 1 h. Next, tissue sections were treated with 3,3′-diaminobenzidine (DAB) or 3-amino-9-ethylcarbazole (AEC) and counterstained with hematoxylin. Finally, sections were coverslipped and stored at room temperature.

Statistical analysis. All data are presented as mean ± SD. The statistical significance of all results was evaluated by one-way ANOVA followed by a post hoc Tukey HSD test using R Software Version 3.3.1; *p < 0.05; **p < 0.01.

Data availability. The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.
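The analysis above was run in R 3.3.1; purely as an illustration of the same one-way ANOVA plus Tukey HSD workflow, a minimal Python sketch is given below. The group labels and values are hypothetical, not data from this study.

```python
# Minimal sketch of the one-way ANOVA + post hoc Tukey HSD workflow described
# above (the study itself used R 3.3.1). All values below are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical tumor-weight measurements (g) for three treatment groups.
groups = {
    "PBS":    [1.10, 1.25, 0.98, 1.31, 1.05, 1.18, 1.22],
    "Ta1":    [0.85, 0.92, 0.78, 0.88, 0.95, 0.81, 0.90],
    "Ta1-Fc": [0.60, 0.55, 0.68, 0.62, 0.58, 0.65, 0.59],
}

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc Tukey HSD pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```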
5,881.4
2018-08-17T00:00:00.000
[ "Biology" ]
Improvement in Mechanical Properties and Heat Resistance of PLLA-b-PEG-b-PLLA by Melt Blending with PDLA-b-PEG-b-PDLA for Potential Use as High-Performance Bioplastics Ecofriendly poly(L-lactide)-b-poly(ethylene glycol)-b-poly(L-lactide) (PLLA-b-PEG-b-PLLA) is a flexible bioplastic. In this work, blending with poly(D-lactide)-b-poly(ethylene glycol)-b-poly(D-lactide) (PDLA-b-PEG-b-PDLA) at various blend ratios for stereocomplex formation has been proven to be an effective method for improving the mechanical properties and heat resistance of PLLA-b-PEG-b-PLLA films. The PLLA-b-PEG-b-PLLA/PDLA-b-PEG-b-PDLA blend films were prepared by melt blending followed by compression molding. The stereocomplexation of the PLLA and PDLA end-blocks was characterized by differential scanning calorimetry (DSC) and X-ray diffraction (XRD). The content of stereocomplex crystallites in the blend films increased with the PDLA-b-PEG-b-PDLA ratio. From XRD, the blend films exhibited only stereocomplex crystallites. The stress and strain at break of the blend films obtained from tensile tests were enhanced by melt blending with the PDLA-b-PEG-b-PDLA. The heat resistance of the blend films, determined from testing of dimensional stability to heat and from dynamic mechanical analysis, was improved with the PDLA-b-PEG-b-PDLA ratio. The stereocomplex PLLA-b-PEG-b-PLLA/PDLA-b-PEG-b-PDLA films prepared by melt processing could be used as flexible, heat-resistant packaging bioplastics.

Introduction

In the past few decades, biodegradable bioplastics have been widely developed for use instead of non-biodegradable petroleum-based plastics, owing to plastic-waste pollution and the implementation of low-carbon environmental protection. Poly(L-lactic acid) or poly(L-lactide) (PLLA) is an important bioplastic that has attracted wide attention due to its nontoxicity, biocompatibility, biodegradability, biorenewability, and good processability [1][2][3]. PLLA has uses in many fields such as biomedicine, food packaging, and agriculture [4][5][6], but its use in some applications is limited by its low flexibility and poor heat resistance [7,8]. Stereocomplex polylactide (scPLA) can be formed by blending PLLA with poly(D-lactide) (PDLA); the stereocomplex crystallites have stronger interactions than the homocrystallites of PLLA and PDLA [9]. This induces higher melting temperatures (approximately 210-240 °C) and faster crystallization than PLLA, thereby enhancing the mechanical properties, heat resistance, and hydrolysis resistance of scPLA [10,11]. Highly heat-resistant scPLA is appropriate for applications such as hot-fill packaging, heat-treatment packaging, and microwave applications. However, the glass transition temperature (Tg) of scPLAs is still similar to that of PLLA (approximately 60 °C), and the brittle character of scPLA is still limiting in some applications.
All the high-molecular-weight stereocomplex PLLA/PDLA-PEG-PDLA and PLLA-PEG-PLLA/PDLA-PEG-PDLA blends reported so far were prepared by solution blending [14][15][16][17][18][19][20][27]. However, the fabrication of scPLA by melt processing is very interesting because of its possible use in industrial-scale applications. The stereocomplexation, mechanical properties, and heat resistance of melt-processed scPLAs need to be better understood for use in practical applications. Therefore, in this work, PLLA-PEG-PLLA/PDLA-PEG-PDLA blend films were prepared by melt blending before compression molding to investigate the influence of blend ratio on their stereocomplexation, mechanical properties, and heat resistance.

Synthesis and Characterization of PLLA-PEG-PLLA and PDLA-PEG-PDLA. The PLLA-PEG-PLLA and PDLA-PEG-PDLA were synthesized by ring-opening polymerization in bulk at 165 °C under a nitrogen atmosphere for 6 h, using 0.075 mol% Sn(Oct)2 as the catalyst and PEG as the initiator. The feed molecular weights of both the PLLA-PEG-PLLA and the PDLA-PEG-PDLA, calculated from the lactide/PEG feed ratio of 5/1 (w/w) and the molecular weight of PEG (20,000 g/mol), were approximately 120,000 g/mol. The obtained copolymers were granulated before drying in a vacuum oven at 110 °C for 3 h to remove any unreacted lactide.

The compressed PLLA-PEG-PLLA and blend films were prepared using an Auto CH Carver laboratory press at 240 °C, without any compression force for 1.0 min and then with a 5.0 ton compression force for 1.0 min, before cooling to room temperature. The film thicknesses were in the range 0.20-0.25 mm. The obtained films were kept at room temperature for 24 h before characterization.

Characterization of PLLA-PEG-PLLA/PDLA-PEG-PDLA Blends. The thermal transition behaviours of the blends were investigated using a Perkin-Elmer Pyris Diamond DSC under a nitrogen flow. For DSC, samples of 3-5 mg in weight were held at 250 °C for 3 min to remove thermal history. Then, the samples were quenched to 0 °C using the DSC instrument's own default cooling mode before heating from 0 to 250 °C at 10 °C/min. For the cooling DSC thermograms, the sample was held at 250 °C for 3 min to remove thermal history before cooling to 0 °C at a rate of 10 °C/min.

The degrees of crystallinity from DSC (χc,DSC) of the homocrystallites (hc-χc,DSC) and stereocomplex crystallites (sc-χc,DSC) were determined from the DSC heating scan using (1) and (2), respectively. The percentage of stereocomplexation (SC) was calculated from (3). [Table notes: a: [α] (specific optical rotation) determined by polarimetry using chloroform as the solvent at 25 °C at a wavelength of 589 nm [28]. b: Mn (number-averaged molecular weight) and dispersity index (Đ) measured by GPC using tetrahydrofuran as the eluent at 40 °C.] In (1)-(3), hc-ΔHm and sc-ΔHm were the melting enthalpies of the homo- and stereocomplex crystallites, respectively, and ΔHcc was the cold-crystallization enthalpy. The melting enthalpies for hc-χc,DSC and sc-χc,DSC = 100% were 93 and 142 J/g, respectively [31]. The weight fraction of the PLA end-blocks (WPLA), calculated from the mole ratio of lactide:ethylene oxide obtained from 1H-NMR and the weight of each repeating unit [12], was 0.83 for both the PLLA-PEG-PLLA and the PDLA-PEG-PDLA.
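Equations (1)-(3) did not survive extraction. A reconstruction consistent with the definitions above is sketched in LaTeX below; these are the standard DSC crystallinity formulas, so the exact placement of the cold-crystallization term ΔHcc is an assumption rather than the authors' verbatim notation:

```latex
% Hedged reconstruction of Eqs. (1)-(3) from the stated definitions:
% each melting enthalpy is normalized by the 100% values (93 and 142 J/g)
% and by the PLA end-block weight fraction W_PLA (= 0.83 here).
\begin{align}
\mathrm{hc}\text{-}\chi_{c,\mathrm{DSC}} &=
  \frac{\mathrm{hc}\text{-}\Delta H_m - \Delta H_{cc}}{93 \times W_{\mathrm{PLA}}}
  \times 100\% \tag{1}\\
\mathrm{sc}\text{-}\chi_{c,\mathrm{DSC}} &=
  \frac{\mathrm{sc}\text{-}\Delta H_m}{142 \times W_{\mathrm{PLA}}}
  \times 100\% \tag{2}\\
\mathrm{SC}\,(\%) &=
  \frac{\mathrm{sc}\text{-}\chi_{c,\mathrm{DSC}}}
       {\mathrm{hc}\text{-}\chi_{c,\mathrm{DSC}} + \mathrm{sc}\text{-}\chi_{c,\mathrm{DSC}}}
  \times 100 \tag{3}
\end{align}
```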
The crystalline structures of the compressed films were measured using a Bruker D8 Advance wide-angle X-ray diffractometer (XRD) at 25 °C with CuKα radiation at 40 kV and 40 mA. For XRD, a scan speed of 3°/min was used to determine the crystalline structures. The degrees of crystallinity from XRD (χc,XRD) for the homocrystallites (hc-χc,XRD) and stereocomplex crystallites (sc-χc,XRD) of the scPLA films were calculated using hc-χc,XRD = [Shc/(Shc + Ssc + Sa)] × 100 (4) and sc-χc,XRD = [Ssc/(Shc + Ssc + Sa)] × 100 (5), respectively, where Shc, Ssc, and Sa were the diffraction peak areas of the homocrystallites, the stereocomplex crystallites, and the amorphous hump region, respectively.

The tensile properties of the compressed films were determined using a Lloyds LRX+ universal mechanical tester at 25 °C and 65% relative humidity. The films (100 × 10 mm) were tested with a gauge length of 50 mm and a crosshead speed of 50 mm/min according to ASTM D882. The tensile properties were averaged from at least five experiments for each sample.

The dimensional stability to heat of the compressed scPLA films was tested in an air oven at 80 °C for 30 sec under a 200 g load. The initial length of the film samples was 20.0 mm. The dimensional stability was calculated by [32]: Dimensional stability (%) = [initial length (mm) / final length (mm)] × 100 (6).

The thermomechanical properties of compressed films measuring 5 × 20 × 0.2 mm were investigated with a TA Instruments Q800 dynamic mechanical analyzer (DMA) in multifrequency strain mode. For DMA analysis, the film samples were heated from 30 to 150 °C at a rate of 2 °C/min. The scan amplitude was set to 10 µm and the scanning frequency to 1 Hz.

Results and Discussion

Stereocomplexation. The stereocomplexation between the PLLA and PDLA end-blocks of the blends was investigated from the DSC heating thermograms shown in Figure 1, and the DSC results are summarized in Table 2. The Tg values of the blends were in the range 27-29 °C. The flexible PEG middle-blocks acted as plasticizers to decrease the Tg of the PLA end-blocks [12,13]. The stereocomplexation of the PLLA and PDLA end-blocks did not affect the glass-transition behavior of the blends. The PLLA-PEG-PLLA had only a homocrystalline melting peak at 169 °C, while the blends had melting peaks of both homocrystallites (hc-Tm) and stereocomplex crystallites (sc-Tm), in the ranges 161-168 °C and 216-218 °C, respectively. The absence of cold-crystallization peaks suggests that crystallization was completed during the quenching step of the DSC method. As shown in Table 2, a steady, large decrease in hc-χc,DSC and a considerable increase in sc-χc,DSC were observed with increasing PDLA-PEG-PDLA ratio from 0 to 50 wt%. The %SC increased with the PDLA-PEG-PDLA ratio: a higher PDLA-PEG-PDLA ratio gave a larger PDLA fraction for stereocomplex formation with the PLLA end-blocks of PLLA-PEG-PLLA. Figure 2 shows DSC cooling thermograms of the PLLA-PEG-PLLA and blends after being melted at 250 °C.
The PLLA-PEG-PLLA had a crystallization temperature (Tc) at 105 °C with an enthalpy of crystallization (ΔHc) of 27.4 J/g. The flexible PEG middle-blocks enhanced the plasticizing effect for homo-crystallization of the PLLA end-blocks during the DSC cooling scan. The 90/10 blend exhibited a Tc at 84 °C, lower than that of the pure PLLA-PEG-PLLA. However, the Tc and ΔHc values of the blends increased significantly with increasing PDLA-PEG-PDLA ratio. This suggests that the crystallization of the blends during the DSC cooling scan was accelerated by increasing the PDLA-PEG-PDLA ratio, which could be explained by the crystallization of stereocomplex crystallites of PLA being faster than that of the homocrystallites [33].

The crystalline structures of the PLLA-PEG-PLLA and blend films were determined from the XRD patterns presented in Figure 3. The PLLA-PEG-PLLA film exhibited a diffraction peak at 17° attributed to the homocrystalline structure of the polylactide matrix [12], while all the blend films showed weak diffraction peaks at 12°, 21°, and 24° ascribed to the stereocomplex-crystalline structure [27,34]. The hc-χc,XRD of the PLLA-PEG-PLLA film and the sc-χc,XRD of the blend films, calculated from (4) and (5), respectively, are summarized in Table 3. The sc-χc,XRD of the blend films increased with the PDLA-PEG-PDLA ratio. The hc-χc,XRD and sc-χc,XRD values in Table 3 were lower than the hc-χc,DSC and sc-χc,DSC values in Table 2. This may be related to the easier mobility of the copolymer chains during the quenching step of the DSC method, where no compression forces acted, enhancing the crystallization of the PLA end-blocks. In addition, some disagreement among quantitative crystallinity results from different measurement methods is frequently encountered.

Tensile Properties.
Figure 4 shows selected tensile curves of the PLLA-PEG-PLLA and blend films. All the films except the 50/50 blend film had a yield point, indicating that they were flexible. The flexible PEG middle-blocks enhanced the plasticizing effect on the polylactide end-blocks [12,13]. The stress at yield of the films increased with the PDLA-PEG-PDLA ratio. The stronger interactions between PLLA-PEG-PLLA and PDLA-PEG-PDLA in the amorphous phase of the 50/50 blend film might have suppressed the yield effect [27]. The averaged tensile properties, including stress and strain at break as well as Young's modulus, are compared in Figure 5. The blend films showed higher stress and strain at break than the PLLA-PEG-PLLA film, and these values increased as the PDLA-PEG-PDLA ratio increased. These results suggest that stereocomplexation between PLLA-PEG-PLLA and PDLA-PEG-PDLA improved the tensile properties of the blend films. The stereocomplex crystallites of the PLLA/PDLA end-blocks had better tensile strength than the homocrystallites of the PLLA and PDLA end-blocks due to stronger intermolecular forces in the stereocomplex crystallites [9]. The stereocomplex crystallites also acted as physical crosslinkers of the PLLA-PEG-PLLA and PDLA-PEG-PDLA chains in the film matrix to increase the extensibility of the blend films [35,36]. In addition, the tensile stress at break of the 50/50 melt-blend film in this work (21 MPa) was lower than that of the 50/50 solution-blend film (∼40 MPa) [27]. This may be due to thermal degradation and chain scission during melt blending. The initial Young's moduli of the PLLA-PEG-PLLA and blend films were in the range 567-640 MPa. That the modulus did not change significantly with the PDLA-PEG-PDLA ratio indicates that the stiffness of the PLLA-PEG-PLLA and blend films was similar, possibly because the Tg values of these films were similar (27-29 °C) and their degrees of crystallinity from XRD were low.

Heat Resistance. The dimensional stability to heat of the film samples was determined at 80 °C for 30 sec under a 200 g load to study the heat resistance of the films. Figure 6 illustrates the PLLA-PEG-PLLA and blend films before and after testing. The PLLA-PEG-PLLA film showed the longest film extension after the test [Figure 6(a)]. The film extension decreased when PDLA-PEG-PDLA was blended in and as the PDLA-PEG-PDLA ratio was increased. The heat resistance of the films was compared using the %dimensional stability to heat calculated from (6), shown in Figure 7 (see the worked illustration below). The %dimensional stability to heat of the films was directly related to the heat resistance, which steadily increased with the PDLA-PEG-PDLA ratio. The results suggest that the stereocomplexation of the PLLA-PEG-PLLA/PDLA-PEG-PDLA blends improved the heat resistance of the blend films.

The heat resistance of PLLA and scPLA has been widely investigated by DMA analysis of the storage modulus as a function of temperature [37,38]. The storage modulus of low-crystallinity PLLA drops dramatically as the temperature passes the Tg region before increasing again due to cold crystallization of PLLA during the DMA heating scan. This indicates that low-crystallinity PLLA has poor heat resistance [39]. Meanwhile, good heat resistance was obtained when the PLLA had a high degree of crystallinity that maintained the stiffness of PLLA as it passed through the Tg region [8].
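As a quick worked illustration of equation (6): the 20.0 mm initial length is taken from the text, while the final lengths below are hypothetical values, not measured data. A film that extends more on heating ends up with a larger final length and therefore a lower %dimensional stability.

```python
# Worked illustration of Eq. (6):
#   dimensional stability (%) = (initial length / final length) x 100.
initial_length_mm = 20.0                            # from the text
hypothetical_final_lengths_mm = [40.0, 25.0, 21.0]  # assumed, not measured data

for final_mm in hypothetical_final_lengths_mm:
    stability_pct = initial_length_mm / final_mm * 100
    print(f"final length {final_mm:.1f} mm -> dimensional stability {stability_pct:.1f} %")
```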
Figure 8 shows the storage modulus of the film samples from DMA analysis. The storage modulus of the PLLA-PEG-PLLA and blend films dropped dramatically with increasing temperature in the range 30-60 °C, owing to the rubber-like character and low crystallinity of all the films. This indicates that these films had low heat resistance; accordingly, all the films extended in the test of dimensional stability to heat. The storage modulus increased again in the range 90-130 °C due to cold crystallization of the PLA end-blocks. This suggests that the crystallization of the compressed films was not complete, which may be due to the compression forces reducing the chain mobility for crystallization of the PLA end-blocks during film cooling. However, the cold-crystallization regions of the blend films in the DMA analysis were detected at higher temperatures than for the PLLA-PEG-PLLA film and shifted to higher temperatures as the PDLA-PEG-PDLA ratio increased. This can be explained by the chain mobility of the copolymers during cold crystallization in the DMA heating scan being restricted by the stronger intermolecular interactions between the PLLA and PDLA end-blocks.

In addition, the PLLA-PEG-PLLA film exhibited the largest rise in storage modulus during cold crystallization (see the black line in Figure 8). This curve shape indicates the poor heat resistance of the PLLA-PEG-PLLA film, because its stiffness against heat was the lowest [39]. The increases in storage modulus during cold crystallization of the blend films were lower than that of the PLLA-PEG-PLLA film and steadily decreased as the PDLA-PEG-PDLA ratio increased. The DMA results suggest that the interactions between the PLLA and PDLA end-blocks in the amorphous phases of the blend films enhanced their stiffness and heat resistance, which supports the results of the %dimensional stability to heat in Figure 7. The stronger interactions of the PLLA and PDLA end-blocks in the amorphous phases of the blend-film matrix could reduce the film extension during the test of dimensional stability to heat.

Conclusions

In this work, stereocomplex PLLA-PEG-PLLA/PDLA-PEG-PDLA blend films were prepared by melt blending before compression molding. The blends showed both homo- and stereocomplex crystallites in the DSC analysis. The hc-χc,DSC decreased and the sc-χc,DSC increased as the PDLA-PEG-PDLA blend ratio increased. The stereocomplexation also enhanced the crystallization of the PLA matrix: a higher PDLA-PEG-PDLA ratio induced faster crystallization in the cooling scan. The XRD results supported the conclusion that the content of stereocomplex crystallites in the blend films increased with the PDLA-PEG-PDLA ratio. The mechanical properties of the blend films were better than those of the PLLA-PEG-PLLA film and increased with the PDLA-PEG-PDLA ratio. The stereocomplexation between the PLLA and PDLA end-blocks improved both the stress and strain at break of the blend films. The values of dimensional stability to heat of the blend films indicated an improvement in their heat resistance that increased with the PDLA-PEG-PDLA ratio. The storage modulus of the blend films during cold crystallization, determined from DMA, indicated that the stronger interactions between the PLLA and PDLA end-blocks in the amorphous phases improved their stiffness and heat resistance. It can be concluded that PDLA-PEG-PDLA blending can improve the mechanical properties and heat resistance of PLLA-PEG-PLLA films for potential use as high-performance bioplastic products.
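As an illustration of the XRD crystallinity calculation in equations (4) and (5) above, here is a minimal sketch; the peak areas are hypothetical values chosen for demonstration, not the paper's data.

```python
# Illustration of Eqs. (4)-(5): degrees of crystallinity from XRD peak areas.
# S_hc, S_sc, S_a are the homocrystallite, stereocomplex-crystallite, and
# amorphous-hump areas; the values below are assumed for demonstration only.
S_hc, S_sc, S_a = 120.0, 260.0, 620.0

total = S_hc + S_sc + S_a
hc_xc_xrd = S_hc / total * 100   # Eq. (4): homocrystallinity, %
sc_xc_xrd = S_sc / total * 100   # Eq. (5): stereocomplex crystallinity, %

print(f"hc-Xc,XRD = {hc_xc_xrd:.1f} %")
print(f"sc-Xc,XRD = {sc_xc_xrd:.1f} %")
```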
c: Glass transition temperature (Tg) and melting temperature (Tm) measured by DSC (samples were melted at 200 °C for 3 min and cooled to 0 °C before scanning from 0 to 200 °C at 10 °C/min under N2 flow).
3,680.6
2019-02-24T00:00:00.000
[ "Materials Science" ]
Utilisation of JSI TRIGA Pulse Experiments for Testing of Nuclear Instrumentation and Validation of Transient Models A vital phase in the development of nuclear instrumentation detectors and associated electronic data acquisition systems is experimental testing and qualification in a well-characterized and representative radiation field in a reference irradiation facility. The 250 kW Jožef Stefan Institute (JSI) TRIGA Mark II research reactor is a very well characterized reactor in terms of the knowledge of the neutron and gamma fields, a product of the work performed at the JSI over the last decade, mostly in collaboration with the Instrumentation, Sensors and Dosimetry Laboratory at CEA, Cadarache [1]-[5]. The neutron flux level in the JSI TRIGA reactor in steady-state mode is on the order of 1E13 n/cm 2 s, which is too low for MTR-relevant testing. However, in pulse mode operation the JSI TRIGA reactor can achieve 1 GW for approximately 5-10 ms, thus achieving MTR-relevant testing conditions for a short time. Moreover, pulse operation can be used for experimental validation of reactor transient modelling.

I. INTRODUCTION

A vital phase in the development of nuclear instrumentation detectors and associated electronic data acquisition systems is experimental testing and qualification in a well-characterized and representative radiation field in a reference irradiation facility. The 250 kW Jožef Stefan Institute (JSI) TRIGA Mark II research reactor is a very well characterized reactor in terms of the knowledge of the neutron and gamma fields, a product of the work performed at the JSI over the last decade, mostly in collaboration with the Instrumentation, Sensors and Dosimetry Laboratory at CEA, Cadarache [1]-[5]. The neutron flux level in the JSI TRIGA reactor in steady-state mode is on the order of 1E13 n/cm 2 s, which is too low for MTR-relevant testing. However, in pulse mode operation the JSI TRIGA reactor can achieve 1 GW for approximately 5-10 ms, thus achieving MTR-relevant testing conditions for a short time. Moreover, pulse operation can be used for experimental validation of reactor transient modelling. In order to support pulse experimental campaigns, data from all pulse experiments were collected and are publicly available at http://trigapulse.ijs.si/. In addition, a comparison of measured pulse physical parameters (maximal power, total released energy and full width at half maximum) was made with theoretical predictions from the Fuchs-Hansen (FH) and the Nordheim-Fuchs models. The FH model was used for the evaluation of experimental uncertainties. It was shown that the experimental data follow the theoretical models, but in some cases there is a large deviation, which can be attributed to large experimental uncertainties and uncertainties of the theoretical models. The validated models will be used to support future experimental campaigns, in which nuclear instrumentation and data acquisition systems for fast transient measurements will be tested.

II. TRIGA MARK II

The TRIGA research reactor at JSI is a 250 kW TRIGA Mark II reactor. It is a light-water pool-type reactor cooled by natural convection. The reactor core is placed at the bottom of a 6.25 m high open aluminum tank, measuring 2 m in diameter (see Figure 1), which is filled with demineralized water. The core shape is cylindrical and there are 91 locations in the core, which can be occupied by fuel rods, a neutron source, irradiation channels and control rods (see Figure 2).
The fuel elements are cylindrical in shape, containing a homogeneous mixture of uranium and zirconium hydride clad in stainless steel. The TRIGA reactor has four control rods: three control rods with fuel extensions and a transient control rod with a void instead of a fuel extension. The absorber in the control rods is boron carbide (B 4 C). The elements in the core are arranged in six concentric rings with 1, 6, 12, 18, 24 and 30 available locations. Each location corresponds to a hole in the aluminum upper and lower grid plates of the reactor, which confine the reactor core. The core is surrounded by a graphite reflector enclosed in an aluminum casing. The pulse experiment is performed using a pulse rod system in which the inserted transient control rod is ejected from the reactor core by a pneumatic system [6]-[7].

III. PULSE EXPERIMENT

After the quick ejection of the transient control rod from the reactor in a pulse experiment, the reactor becomes prompt supercritical within a short time, a few tens of ms, and the power begins to increase exponentially. The change in power, and consequently in fuel temperature, decreases the reactivity due to the prompt negative temperature reactivity coefficient of the fuel, which makes the reactor establish a new equilibrium state quickly and efficiently. Due to the decrease of the reactivity, the chain reaction is slowed down or interrupted, resulting in a decrease in power. The peak power of the pulse reaches the MW range and the total released pulse energy is relatively small (typically a few MJ) due to the short pulse time. In the TRIGA reactor, the temperature reactivity coefficient of the fuel is the strongest and most important feedback effect of the reactor state on reactivity. The temperature reactivity coefficient of fuel α g is defined as [7]: α g = Δρ/ΔT g , where Δρ is the change in the reactivity of the reactor and ΔT g is the change in the average fuel temperature over the entire reactor core. The reason for an effective and prompt temperature feedback mechanism lies in the special fuel composition, which is a homogeneous mixture of 20 % enriched uranium and zirconium hydride (ZrH ratio 1.6). Since the hydrogen in the zirconium hydride serves as moderator, most moderation takes place in the fuel element itself and only a small part in the water surrounding the fuel elements. Consequently, any change in power, and therefore also in fuel temperature, is immediately reflected in the moderator in the fuel element. Therefore, both the fuel and the moderator immediately feed back on the reactivity of the core. Since the change in fuel temperature affects reactivity in different ways, the negative temperature reactivity coefficient of fuel is the sum of several contributions. The Doppler effect in the fuel and the shift of the thermal part of the neutron spectrum to higher energies contribute the most to the prompt negative temperature reactivity coefficient of the fuel in the TRIGA reactor. The basis of the Doppler phenomenon is the broadening of the resonances for neutron capture in uranium-238 (see Figure 3, which shows the capture cross section in uranium-238 as a function of energy at three different temperatures [8]), which increases the absorption of neutrons and thus reduces reactivity. Another important contribution is the shift of the thermal part of the neutron spectrum to higher energies, or spectrum hardening.
Since the peak of the neutron spectrum (see Figure 4) is moved to higher energies, where the microscopic cross section for fission in uranium-235 is smaller, the neutron flux is reduced, followed by a decrease in reactivity. Lethargy is defined as u = ln(E 0 /E), and the lethargy neutron spectrum is the neutron spectrum multiplied by energy.

IV. THEORETICAL MODEL

The theoretical model for the description of pulses is the Fuchs-Hansen adiabatic model [10], derived from the equations of point reactor kinetics on the basis of four assumptions. The first assumption is that the system is adiabatic, i.e., that the fuel does not exchange heat with the environment, so that all the released energy is used to heat the fuel. The second assumption is that all delayed neutrons can be ignored. Both of these assumptions are good because the pulse time is short: there is no significant heat transfer during this time, and the pulse is short compared to the time at which the delayed neutrons are released. The third assumption is that the power of the reactor before the pulse was low or equal to zero. This assumption is valid, as the reactor is initially subcritical or at low power and the neutron source in the reactor is not strong. The last assumption is an instantaneous reactivity change, which is valid due to the rapid withdrawal of the transient control rod from the reactor. If we take the first two assumptions into account in the equations of point kinetics, a single point-kinetics equation is obtained: dP/dt = ((ρ − β)/Λ) P, where P is the power, ρ is the inserted reactivity, β is the effective delayed neutron fraction (TRIGA: β = 1 $ = 0.007 [6]), and Λ is the average generation time, i.e., the time between the birth of a neutron and its death by fission. The average generation time can be calculated as Λ = l/k, where l is the average lifetime of prompt neutrons, i.e., the time between the birth of a neutron and its death by absorption or leakage, and k is the multiplication factor. In the derivation, it is also important to define the prompt reactivity ρ′ = ρ − β. From the first assumption, it follows that the reactivity during the pulse decreases proportionally to the released energy, or the resulting fuel temperature: ρ(t) = ρ − γE(t), where γ is the effective temperature reactivity coefficient of the fuel and E(t) = ∫ 0 t P(t′) dt′ is the total energy released during the pulse up to time t. By solving the Fuchs-Hansen adiabatic model described above, the time dependence of the power P(t) and the released energy E(t) can be determined. Considering the third assumption in equations (6) and (7), the maximum power can be predicted as P max = ρ′ 2 /(2γΛ), and the total energy released during the pulse as E tot = 2ρ′/γ. The total energy released does not depend on the average generation time and is therefore the same for fast and thermal reactors. Unlike the total energy released, the maximum power is inversely proportional to the average generation time and is therefore higher in fast reactors than in thermal reactors, although the total energy released is comparable. The average generation time in a thermal reactor is Λ t = 10 −4 - 10 −3 s and in a fast reactor Λ f = 10 −8 - 10 −7 s. Both the maximum power and the total energy released are inversely proportional to the effective temperature reactivity coefficient of the fuel, which is desirable from a safety point of view, because this minimizes the effect of an increase in reactivity. In addition to the maximum power and the total energy released, there is another limit value of the pulse experiment, the full width at half maximum (FWHM).
The full width at half maximum is defined as FWHM = t 2 − t 1 , where t 1 and t 2 are the times at which the pulse reaches half of the maximum power, P(t 1 ) = P(t 2 ) = P max /2. For the Fuchs-Hansen pulse this gives FWHM ≈ 3.52 Λ/ρ′. In addition to the Fuchs-Hansen model, there is also the Nordheim-Fuchs model [11], which, on the basis of the same assumptions but with a slightly different derivation, leads to the same limit values.

V. COMPARISON OF THEORETICAL MODELS AND EXPERIMENTAL VALUES

The data from the pulse experiments that have been carried out on the TRIGA Mark II reactor at the Jožef Stefan Institute have been collected and are publicly available at http://trigapulse.ijs.si/. The purpose was to analyse all the pulses that have been performed so far and to make a comparison between the theoretical models (the Fuchs-Hansen model and the Nordheim-Fuchs model) and the performed pulse experiments. The limit values (maximum power, total energy released and full width at half maximum) are compared between the theoretical models and the experiments. In Figure 5, the uncertainty of the inserted reactivity (ρ) has been estimated at 5 % [13] and the relative uncertainty of the effective temperature reactivity coefficient of the fuel at 50 % [6]. Where five or more experimental data points exist at a certain reactivity, a statistical analysis was made, indicating the maximal (max) and minimal (min) values, the average, the median and the 1σ deviation. It can be seen that the experimental data are mostly within the estimated uncertainty of the theoretical prediction. In the case of the experimental data, there is considerable uncertainty in determining the inserted reactivity, and between the various pulses the core composition changes (number and position of fuel elements), which affects the maximum power of the pulse and consequently the total released energy and the full width at half maximum. The experimental values of maximum power and total energy released can be divided into two parts because of the different numbers of fuel elements in the core. A comparison between the theoretical models and the measurements was also made for the total energy released. The total energy released is, in the case of the experimental measurements, determined by an integral of the power curve. It is important to set proper boundaries for the integration so as to capture only the response of prompt neutrons and omit the delayed neutrons, as foreseen in the theoretical models. The boundaries were set in three different ways (see Figure 6). The first method is '1000 pts', where the power curve is integrated over 1000 measurements before and after the maximum power value. The second method is '1 %', where the integration of the power curve is performed from 1 % of the maximum power on the rising edge to 1 % of the maximum power on the falling edge. The third method is the 'tangent', where the result is the integral of the power curve between the points determined by the tangents to the power graph at 1/10 of the maximum power value. In Figure 7, the experimental values of the total energy released (1000 pts in green, 1 % in blue and tangent in red) and the theoretical predictions (black line) are shown as a function of the inserted reactivity. Where five or more experimental data points exist at a certain reactivity, a statistical analysis was made, and the grey area shows the uncertainty of the theoretical model. As in the case of the maximum power, it can be observed that the experimental values follow the theoretical predictions, but at low inserted reactivities the experimental data lie outside the determined uncertainties of the theoretical model.
It can also be observed that all three methods of determining the total energy released agree well among themselves, especially in the case of large inserted reactivity. The comparison of the theoretical models and the experiments for the full width at half maximum is presented in Figure 8. As in the case of the maximum power, the black line shows the theoretical predictions and the blue dots show the performed pulse experiments. The grey area shows the uncertainty of the theoretical prediction. Where five or more experimental data points exist at a certain reactivity, a statistical analysis was made. It can be noted that the match between the theoretical predictions and the experimental data is rather poor: many experimental data points lie outside the estimated uncertainty of the theoretical models. The agreement is worst at low inserted reactivities, which was expected. The reason for the poor match is the high uncertainty in determining the full width at half maximum from the experimental data, especially at low inserted reactivities.

CONCLUSION

The purpose of this work was to collect all data from pulse experiments in order to support pulse experimental campaigns in which pulse operation can be used for experimental validation of reactor transient modelling. Also, to achieve MTR-relevant testing conditions at the JSI TRIGA Mark II reactor, pulse mode operation is used, in which approximately 1 GW can be achieved for 5 ms - 10 ms. The pulse mode provides a well-characterized and representative radiation field which will be used to support future experimental campaigns, in which nuclear instrumentation and data acquisition systems for fast transient measurements will be tested. All experimental data of the pulse experiments are publicly available at http://trigapulse.ijs.si/. A comparison of the measured pulse physical parameters (maximal power, total released energy and full width at half maximum) was made with theoretical predictions from the Fuchs-Hansen and the Nordheim-Fuchs models. It was shown that the experimental data for the maximum power and total energy released are mostly within the estimated uncertainty of the theoretical model. They differ most at low inserted reactivities, where background noise disturbs the determination of the experimental values; the theoretical models can only be applied at higher reactivities. For the full width at half maximum, the deviations are greater due to the greater uncertainty in determining the values from the experimental data.
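To make the Fuchs-Hansen limit values discussed above concrete, here is a minimal numerical sketch. Only β = 0.007 is taken from the text; the inserted reactivity, γ and Λ are illustrative assumptions, not measured JSI TRIGA parameters.

```python
# Minimal sketch of the Fuchs-Hansen adiabatic pulse model described above.
# beta is the value quoted in the text; rho, gamma and Lam are illustrative
# assumptions, not measured JSI TRIGA parameters.
beta = 0.007          # effective delayed neutron fraction (1 $), from the text
rho = 2.0 * beta      # inserted reactivity: a 2 $ pulse (assumed)
gamma = 1e-8          # effective fuel temperature reactivity coefficient per J (assumed)
Lam = 4e-5            # average generation time in s (assumed, thermal-reactor range)

rho_p = rho - beta    # prompt reactivity rho'

# Closed-form Fuchs-Hansen limit values.
P_max = rho_p**2 / (2 * gamma * Lam)   # maximum power, W
E_tot = 2 * rho_p / gamma              # total released energy, J
fwhm = 3.52 * Lam / rho_p              # full width at half maximum, s

print(f"P_max ~ {P_max:.3e} W, E_tot ~ {E_tot:.3e} J, FWHM ~ {fwhm*1e3:.2f} ms")

# Cross-check by explicit integration of dP/dt = ((rho - gamma*E - beta)/Lam) P.
dt, t_end = 1e-6, 0.2
P, E, P_peak = 1.0, 0.0, 0.0           # small initial power (third assumption)
for _ in range(int(t_end / dt)):
    dPdt = (rho - gamma * E - beta) / Lam * P
    P += dPdt * dt
    E += P * dt
    P_peak = max(P_peak, P)
print(f"numerical: P_max ~ {P_peak:.3e} W, E_tot ~ {E:.3e} J")
```

With these assumed parameters the sketch gives a total released energy of about 1.4 MJ, which is consistent with the "typically a few MJ" stated in the paper; the closed-form and numerically integrated values should agree closely.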
3,683.4
2020-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Hegelian Dialectics: Implications for Violence and Peace in Nigeria "Life is a mystery" is a saying that most people are familiar with. The mysteriousness of life consists, among other things, in its unpredictability despite efforts and developments in science and technology. Indeed, paradoxes and contradictions abound in every facet of life, to such an extent that some would be inclined to subscribe to nihilism, fatalism or catastrophism as the primordial reality. This is implied in the Heraclitian "flux", the dialectic of Hegel, the "nothingness" of Sartre, etc. The logical implication of these positions would be the meaninglessness of life. But is life really meaningless? Can something positive come out of negative events in the world like violence, conflict, war and so on? Could the myriad forms of violence in Nigeria, for instance, caused by Boko Haram in the North East, by the Indigenous People of Biafra (IPOB) in the East and by the Niger Delta Avengers in the South-South, be beneficial in the long run? In other words, could something positive ensue from these obnoxious situations? These are the concerns of this paper, which examines the Hegelian dialectics that apparently accepts the co-existence of paradoxes and contradictions as complementary realities resulting in a synthesis. However, this paper holds that the synthesis will only lead to a better state of affairs if premised on affective humanism as an ontology.

Prolegomena

There is a famous saying among Nigerians that "life is full of ups and downs". The reality of the saying is so glaring that it has become an axiom. Daily events in the world, from the earliest times till today, exhibit this complementary nature of existence to such an extent that some are inclined to think of complementarity as an ontology. This was the position of Heraclitus, who saw "change" as the archetype of reality, and of Hegel, who believed that "contradictions, tensions, paradoxes, oppositions and reversals are at the heart of all thought and even reality itself" (Lawhead, 2002). The Africans too believe in the complementarity of opposites. Given this scenario, there is a tendency in the nihilistic mind to see disaster in the world order. Some would be quick to emphasize one side of the divide as fundamental and absolute. Hegel, however, following his dialectic, insists that "truth is multifaceted and each set of oppositions points to higher, more encompassing viewpoints" (Lawhead, 2002). It is within this understanding that violence, usually regarded as an unpleasant phenomenon, a state of affairs with devastating consequences for the political, social, religious and economic life of a people and a nation, yet undeniably part and parcel of the world order, is examined in this paper. The World Health Organization defines violence as behaviour or treatment in which physical force is exerted for the purpose of causing damage (Wikipedia, Online). The noun form means behaviour involving physical force intended to hurt, damage, or kill someone or something, with brutality, strength of emotion or destructive natural force portraying its intensity and severity (Oxford Dictionary, Online). The verb form means to be incompatible or at variance; it denotes a clash, understood as an act of aggression, with the property of being wild or turbulent; a turbulent state resulting in injuries and destruction; the intentional use of physical force or power to threaten or cause damage.
Conflict, on the other hand, though not as ferocious in nature as violence, is a serious disagreement or argument, typically a protracted dispute, quarrel or squabble; a coming into collision or disagreement; being contradictory; an active disagreement between people with opposing opinions. It differs from violence in magnitude, intensity and effect. Sometimes, protracted conflict escalates into violence (Wikipedia, Online). Heraclitus, who is credited with being the first to articulate this thought of violence as the basis of reality, used the word eris, which has many connotations, to describe conflict and strife. It could stand for war, quarrel, debate, rivalry, contention, jealousy, etc. So, this paper employs violence and conflict interchangeably as an unpalatable situation of turbulence, internal and external, which is often expected to result in catastrophe. Based on this understanding of violence, and given the spate of the phenomenon of violence in Nigeria, ranging from the religious and ethnic to the political, with a magnitude far beyond any expectation and anticipation, and taking varied dimensions and configurations, from the domestic sphere to public strikes and agitations and now terrorism, many would be quick to paint a gloomy picture of Nigeria's future. However, based on Hegel's dialectic, which subscribes to the complementarity of opposites, this paper examines violence and strife from an ontological perspective in order to find out their intrinsic worth as a complement of peace. Again, given the fact that there is no nation without some form of rancor, the paper then inquires into the necessity of such a phenomenon in the development of a nation like Nigeria, with its myriad other contending problems such as underdevelopment, poverty, corruption, ineffective leadership, unpatriotic citizens, a poor educational system, maladministration and so on. In other words, this paper seeks to find out whether violence of any kind and magnitude can be of any benefit to any country at all, let alone a developing nation like Nigeria. This is the kernel of this paper.

The Nigerian Situation

To say that Nigeria, as a nation, has had its fair share of violence and conflict is to state the obvious (cf. Olawale (2018), ynaija.com; Akinnaso (2018), punchng.com; O'Grady (2018), The Washington Post). There is so much violence and conflict in Nigeria, from the home to the public sphere, from the secular to the revered sphere of religion, with such unprecedented frequency, velocity and tenacity that many fear their consequences for Nigeria's fragile nationhood. Recently, the activities of Boko Haram in the North East, herdsmen-farmers clashes in the Middle Belt, the Indigenous People of Biafra (IPOB) in the East and the Niger Delta Avengers in the South-South have posed a serious threat to Nigeria's nationhood and should be a cause for concern to any serious-minded person. The issue of violence, however, is not limited to Nigeria, as the world in general has had its fair share of it. Even countries in Europe and the United States of America, which could be said to have all it takes to contain it, have become its prey in recent times, not to mention the Middle East, which is almost becoming a theatre of war. However, the violence in each country and continent has its peculiarities, which could account for its causa generi. In Nigeria, the phenomenon of violence could be attributed to many factors. Aside from the perennial religious and political violence, there is also violence engendered by economic factors.
For instance, there are the militia groups in the Niger Delta region of the country that emerged as a result of perceived neglect, injustice and destruction of their environment, and the clashes with the nomadic Fulani herdsmen over grazing areas for their cattle, which have resulted in many deaths, kidnappings, rapes and various forms of violence in Plateau state, Benue state, Enugu state, etc. Besides violence in the public sphere, there are conflicts and violence in the homes. Cases of chieftaincy tussles, communal land clashes and family in-fighting abound everywhere. The country is thus largely mired in a cesspool of conflicts that threaten her very survival as a nation. Every day the country makes news headlines for various forms of violence, reflecting the Hobbesian state of nature. Conflict is almost becoming the order of the day, to such an extent that people are becoming familiar with sights of violence. They are inundated with news of violence on a daily basis, such that they may regard the problem as insurmountable. But is the situation unredeemable? Certainly not, as every human problem can be solved given the right approach and appreciation of the problem. It is on this note that a consideration of the Hegelian dialectics of violence is contemplated as another perspective on the issue of violence in Nigeria and the world over.

Violence as an Ontological Phenomenon

Heraclitus' position seems to suggest violence as constitutive of reality when he conceived everything as being in a state of flux, or change. To him, what constitutes the essence of the world and sets it in constant motion is fire, which is also a principle of war (Polemos) and strife (Eris). He saw the conflict of opposites as the very essence of reality and the condition for change. Explaining Heraclitus' position, Stumpf puts it thus: The conflict of opposites is not a calamity but the permanent condition of all things. If we could visualize the whole process of change, we should know, says Heraclitus, that "war is common and justice is strife and that all things happen by strife and necessity". From this perspective, he (Heraclitus) says, "what is in opposition is in concert, and from what differs comes the most beautiful harmony". Even death is no longer a calamity, for, "after death things await men which they do not expect or imagine" (Stumpf, 1971). For Heraclitus, then, what constitutes the world is conflict, which should not necessarily be viewed and taken as negative, but as the very condition that engenders change and progress, even when it may momentarily seem to be quite the opposite. This underscores the fact that the world is the unity of diversity, which Heraclitus originally rendered in Greek as sumpheromenon diapheromenon. This position is supported by William Abraham (1985), who submits that though the world seems to be orderly, or an ordered whole, it is also a theatre of conflicts and inherent disorderliness that defies human logic and permutation. In his words: The intended orderliness of nature is not to be grasped as a sheer datum: this much has been apprehended in the felt intractability and existential distress begotten of it, intractability which may occasion unexpected famine through blight, and physical destruction, through sudden flood, earthquake, or spontaneous conflagration. Even in the world of science, it is a generally accepted thesis that the world itself came to be through a violent process: the Big Bang theory.
The theory is explained by Encarta thus: The Big Bang theory proposes that the universe was once extremely compact, dense, and hot. Some original event, a cosmic explosion called the big bang, occurred about 13.7 billion years ago, and the universe has since been expanding and cooling (Encarta Online (2013), www.world-mysteries.com). Consequently, science sees the world as a product of violence. This original conflict that gave birth to the universe, which includes the totality of existence, makes conflict an ontological phenomenon that undergirds all of reality. It is thus all too apparent that violence is an ontological phenomenon that is hardwired into nature itself and the whole gamut of existence. Even human nature itself is embedded in the factuality of conflict and is largely defined by it, as psychologists would have us believe. For instance, the psychologist Carl Jung saw the human mind as the interplay of many opposing impulses that regulate it and ensure equilibrium. According to him, when a certain impulse or libido reaches its extreme, it passes into its opposite (Fordham, 1966). This means that the human psyche is so made that it is a theatre of constant conflict between the various impulses. The ontological reality of conflict made Thomas Hobbes, in the Leviathan, paint human nature in its original state (the state of nature) as being characterized by violence. In Hobbes' words: During the time men lived without a common power to keep them all in awe, they are in that condition which is called war; and such a war, as is of every man, against every man (cited by Alexander Moseley, 2015). In fact, contrary to Hobbes' position, even the presence of the Leviathan that was to keep the people in awe could not take away conflicts from human society. This is because conflict is inherent in the human reality as a condition of its being as such. This fact is well supported by Fyodor Dostoyevsky (1957) in Brothers Karamazov: In every man, of course, a beast lies hidden, the beast of rage, the beast of lustful heat at the screams of the tortured victim, the beast of lawlessness let off the chain, the beast of diseases that follow on vice, gout, kidney disease, and so on (ii, v.4). Given the above supposition, violence and conflict could readily be regarded as an ontological phenomenon predicated on the supposition of change as a primordial reality. Such a position, of course, has a lot of metaphysical implications. The dialectics of violence and peace is also a celebrated theme in religion. For example, in Christian and Jewish theology, it is opined that the world was created from a nebulous and amorphous state of uncertainty, as can be gleaned from the Genesis account of creation: "… the earth was a formless void, there was darkness over the deep" (Gen. 1:1-2). It was also after the violence and tumult of the floods in the time of Noah that a New World emerged, characterized by peace and abundance (Gen. 6 & 9). Even the salvation of man, according to Christian theology, was wrought through the persecution, suffering, death and eventual resurrection of Jesus Christ. This dialectic seems to be anchored on the fact of the duality of existence, which is typified in instances like day and night, hot and cold, man and woman, young and old, being and non-being, good and evil and so on. Dialectically, these apparently opposite realities are believed to be dissoluble into each other as complements and so could be regarded as mutually inclusive.
As a corollary, violence could be regarded as mutually inclusive of peace, without which peace would not be achieved and valued. This bent of thinking must have informed Frantz Fanon's (1967) observation that "… human reality in-itself-for-itself can be achieved through conflict and through the risk that conflict implies". What this translates to is that violence and peace are deducible from each other. The very idea of violence brings to mind the idea of peace and vice versa. Peace is seen as the absence of violence, and violence as the absence of peace. Thus, even in the conceptual analysis of the two terms, they are implicated in each other. Each makes sense in the context of the absence of the other, like Being and Nothing in the Hegelian dialectics. The absence of one makes the presence of the other inevitable, which shows a binding, ontological-cum-dialectical relationship. David Hume, though, would have found fault with this position on the ground that there is no constant conjunction between the two to prove a necessary connection implying their co-existence. However, the traditional criticisms against Hume's ultra-skepticism hold here, mutatis mutandis. That the necessary connection cannot be empirically given does not imply that there is no logical and ontological relationship between violence and peace. This position naturally evokes some fundamental questions regarding the nature of violence. Is violence intrinsically bad? Can something good be derived from something that is bad? Can violence be justified generally, or in some circumstances?

Hegel's Dialectics

Hegel is one of the modern philosophers with a rationalistic bent of mind. He was greatly influenced, among others, by ancient Greek philosophers like Socrates and Aristotle. Among the thoughts of the ancients that so fascinated him was the dialectic, in which diametrically opposed points of view may be advanced in the first instance. As the argument ensues, both parties may gradually come to understand each other's position, and both may come to agree, or to reject their own earlier partial views for a broader view. In this case, the original opposition has been reconciled in a higher synthesis. It was this Greek usage of dialectic that led Hegel to the discovery of the relationship between Logic, Nature and Mind. Hegel conceived this relationship as resulting in a cyclic procedure characterized as the thesis (the positive side), the antithesis (the negative side) and the synthesis (the product). For Hegel, the synthesis itself would become another thesis, thereby repeating the triadic dialectical process all over again, till its culmination in the Absolute. The dialectics of Hegel, which serves as the methodological tool of his philosophical system, is embedded in his logic. Hegel's idea of logic is a deviation from the traditional understanding of the discipline as a tool and method of discovering the course of realities; that is, Hegel understood logic virtually as metaphysics (Stumpf, 1971). Maybe he was influenced by Descartes' insistence on methodology for a clearer understanding of reality. However, while Descartes was more concerned with the relations of ideas to each other, Hegel's methodology was anchored on the inner logic of the totality of reality itself. Hegel's bent of mind is explained by Stumpf (1971) as based on the fact that: Since Hegel had identified the rational with the actual, he concluded that logic and logical connections must be discovered in the actual and not in some empty ratiocination.
To this end, Hegel's logic is not concerned merely with the relations of abstract categories but with the categories and facts of experience as well as those of abstract terms. So, he applied it to the totality of existence. In a nutshell, the crux of Hegelian logic entails that nothing can be deduced from nothing, which, however, was a repetition of a well-known principle of the formal logic that preceded Hegel. In formal logic, it is taken that the conclusion must always be contained in the premise. No category can be deduced from another category that is not implicit in that category. Thus, one cannot deduce B from A if B was not already in some way contained in A. In formal logic, the species are always deduced from the genus. In this way, for instance, the species, man, is deduced from the genus, animal. This is what Spinoza meant earlier in saying that all determinations are negations. It is the determination of the species that negates them from the genus and from the other species of the selfsame genus. The genus, as a pure concept, is indeterminate and undifferentiated. So it is the specific determination of, say, a goat that differentiates it from man, though both come from the same genus, animal. The issue, however, is that if both are different by virtue of their specific determinations, the question arises whether what differentiates the species of a particular genus could be deduced from the genus itself. This problem is better appreciated when one fully grasps the meaning of a genus or universal. A genus is a general idea in which particular things share. But, as a general idea, the genus contains no specific characteristics; were it to do so, it would be a certain thing whose characteristics it possesses and no longer a universal, pure concept. What differentiates the species of a particular genus is the differentia. The novelty of Hegel's position lies in his attempt to show that the differentia could be deduced from the genus. According to W. Stace (1955): The solution of this problem constitutes the central principle of Hegelian philosophy, the famous dialectical process. It rests upon the discovery that it is not true, as hitherto supposed, that a universal absolutely excludes the differentia. Hegel found that a concept may contain its own opposite hidden away within itself, and that this opposite may be extricated or deduced from it and made to do the work of a differentia, thus converting genus into species. Herein lies the crux of the Hegelian dialectics: that opposites are deducible from each other in order to form a third category, Becoming. Hegel conceived the dialectical process as a triad of thesis, antithesis and synthesis, though he never employed these terms in his system. He rather used the terms Abstract, Negation and Concrete to stand for thesis, antithesis and synthesis, respectively. To demonstrate his dialectics, Hegel started with the most abstract and general idea that the mind can possibly form, Being, as the thesis. What informed his choice of Being as the first thesis was his belief that the mind or thought moves from the most abstract idea to specific and concrete ones. For him, therefore, the most general idea the mind can possibly conceive is Being, which by itself is logically prior to any determination and differentiation. Logic therefore has to begin with the indeterminate concept of pure Being, which is the general featurelessness that precedes all definite character and is the very first of all (Stumpf, 1971).
The onus was then on Hegel (1999) to demonstrate this. Hegel strongly believed that a universal term contains some other category that could be deduced from it. To demonstrate this, he analyzed the concept of pure Being. According to him, the idea of pure Being is abstract and indeterminate; it contains no particular qualities, since if it did it would no longer be pure Being. But the very fact that it contains nothing and lacks determination makes the idea of pure Being negative. Since it contains nothing, it is also akin to not-being. There is virtually no difference between pure Being and Nothing; each could be deduced from the other. This really sounds absurd, a fact Hegel himself acknowledged when he said that "the idea that Being and Nothing are the same is so paradoxical to the imagination or understanding that it is perhaps taken for a joke" (Stumpf, 1971). Although he saw the incredibility of the above proposition, Hegel went ahead to maintain that the idea of pure Being contains Nothing, and therefore Nothing itself could be deduced from pure Being. Put symbolically, the universal A, as a category, contains not-A (-A). The above is the main outline of Hegel's logic and dialectics: that opposites, rather than being mutually exclusive, are implicated in each other. Thus, the concept of Being, as a thesis, necessarily contains its opposite, Nothing, as its antithesis, and the two are deducible from each other. This opposition, according to Hegel, creates a third category, Becoming. This third category implies the union or synthesis of Pure Being and Nothing. David James (2007) throws more light on this thus: Hegel's attempt to preserve the distinction between Being and Nothing is to be found in his concept of Becoming, which, when analyzed, turns out to contain within itself both Being and Nothing, either as coming-to-be, in which case thought moves from Nothing to Being, or as ceasing-to-be, in which case thought moves from Being to Nothing. Becoming, then, implies the synthesis of Being and its antithesis, Nothing. Hegel saw Being and Nothing as distinct moments, but moments which nevertheless are mutually involved in each other. Becoming only is, in so far as they are distinguished. The third is other than they; they subsist only in the other, which is equivalent to saying that they are not self-subsistent (Hegel, 1999). Interestingly, Hegel did not stop at the level of Pure Being in his dialectics, but applied it to the whole of existence as the principle that undergirds everything. For him, everything in the world exhibits this triadic process of the dialectics till its culmination in the Absolute. Hegel on Violence Hegel's thought on violence is a corollary of his dialectic. Obviously, the role violence plays in bringing something new into the universe was not originated by Hegel in his dialectics, but had been mooted years before Hegel was born, by Heraclitus in the ancient period. Heraclitus, whose thoughts no doubt greatly influenced Hegel, saw the dialectical importance of violence, strife or war as a precursor to something new and positive when he stated in Fragment 56 that "war is the Father of all and King of all, and some he shows as gods, others as men; some he makes slaves, others free" (Hegel, 1880). As noted earlier, the original Greek word Heraclitus used was Eris, which has many connotations. It could stand for war, quarrel, debate, rivalry, contention, jealousy etc.
For Heraclitus, strife or conflict is the order of the day; nothing is certain and permanent. Everything is in flux (everything changes), since, for him, no one can step into the same river twice. Therefore, for Heraclitus, violence or conflict has a purgative role and brings something new into the universe. This explains why he conceived "Fire" as the original matter, the urstuff or arche, of the universe. Explaining Heraclitus' position, Nicholas of Cusa maintains that finite things are locked in perpetual conflict with each other as contraries, and are always at pains to erase each other from existence. This conflict goes on eternally and would have brought chaos into the world, save for the fact of reason (Ratio), which mediates these conflicts among opposing things, thus ensuring balance in existence. This he refers to as the synthesis of opposites, coincidentia oppositorum (Oburota, 2000). Such contraries include essence and existence, non-being and being, greatness and smallness, more and less, one and many, etc. The bottom line of Nicholas' submission is that there is a dialectical process in nature that ensures balance among opposing things, and that negativity, experienced as conflict, is an ontological condition of being and existence. It is a necessary condition that is a given, a sort of facticity. This is concretized, according to Thomas Hobbes, in the uncompromising violence of the political state that saw the emergence of the Leviathan as a stabilizing factor in maintaining order. For Hobbes, therefore, without the threat of violence and its very actuality, society would relapse into the pristine state of nature with its frightening prospects (Ryan, 1984). Hobbes' position, though, should be viewed against the backdrop of his conception of human nature as rebellious in se, hence the importance of the Leviathan to negate such tendencies. Hegel, in line with his dialectics, saw contradiction as a necessary condition for progress and development. This contradiction could be instantiated in wars, conflicts, violence, strife or disagreements. Hegel seemed more concerned with the ontology of violence and not its moral implication, which would have placed violence in a negative perspective. This explains his exclusion of people like Napoleon Bonaparte, whom he thought embodied the World Spirit, from being bound by the laws and morality of their times (Hegel, 1956). This, in a way, must have also informed Hegel's justification of the atrocities carried out by Napoleon, where millions were killed, as the necessary dialectical stage that ushered in civilization in Europe. The bud disappears in the bursting-forth of the blossom, and one might say the former is refuted by the latter; similarly, when the fruit appears, the blossom is shown up in its turn as a false manifestation of the plant, and the fruit now emerges as the truth of it instead. These forms are not just distinguished from one another; they also supplant one another as mutually incompatible. Yet, at the same time their fluid nature makes them moments of an organic unity in which they not only do not conflict, but in which each is as necessary as the other; and this necessity alone constitutes the life of the whole. What this metaphor of Hegel's shows is that everything contains its own negation, which is constitutive of its ontological essence and ensures its organic wholeness. Therefore negation, as a contradiction, is always a positive process in Hegelian philosophy.
Erol (2010) gives a concise summary of Hegel's dialectics of violence as follows: In a macro sense, conflict can be understood as a negation of the status quo and peace efforts as the negation of the conflict. The resulting product would be "the synthesis" or the new status quo, which also has a further negation, and this dynamic would keep on unfolding. To Hegel's (1956) mind, the ultimate end of conflict is the achievement of freedom, which is the essence of the spirit (Philosophy of History, 17). Conflicts arise because people's worth is not recognized, thus impinging on their freedom. Thus freedom, according to him, consists in recognition. To this end, Hegel did not see war as evil, but as the very means the Absolute employs to achieve its aim. In the words of Wiser (1983): … for Hegel, wars are not evil. Rather they are the necessary means by which the World-Spirit evolves, and thus they ensure the dominance of reason in history. For Hegel, the victors are always right because in His wisdom God so constructed history that the powerful and the righteous are always one and the same. This position made Hegel disagree with Kant on the "Theory of Perpetual Peace" among nations through the League of Nations that Kant had suggested as the arbiter between nations in times of conflict. In Hegel's contention, nations are in a state of nature in relation to each other and will always protect their interests. He was skeptical about Kant's suggestion on the role of the League of Nations, as could be gleaned from his submission that: There is no praetor to judge between states; at best there may be an arbitrator or a mediator, and even he exercises his functions contingently also, i.e. in dependence on the particular wills of the disputants (Hegel, 1952). Since matters between states cannot be objectively settled without recourse to contingent subjective interests, Hegel opined that the best way to settle disagreement between nations is through war, especially where their particular interests cannot be reconciled (Hegel, 1952). Hegel's submission that violence is not intrinsically evil was later supported by other philosophers like Martin Heidegger (1956), Jean Paul Sartre (1992), Amilcar Cabral (1967), Frantz Fanon (1967), and Oronto Douglas and Doife Ola (2003). Heidegger (1956), for instance, in his discussion on Polemos (the original violence or chaos), believes violence plays a necessary role in ushering in something positively new. He defined the Polemos as "the conflict that prevailed prior to everything divine and human…; that first projects, and develops what had hitherto been unheard of, unsaid and unthought" (62). For Heidegger (1956), conflict is one of the necessary conditions of Being, Sein, which also becomes implicated in the being-of-things, Seiende, including the Dasein (the human reality). He believed that Polemos (conflict) gave birth to everything and continues to shape the course of existential events. It is very pellucid that Heidegger did not see violence as something negative, but as an ontological necessity that drives civilization on. This must have informed his dalliance with Nazism, which, unfortunately, remains a slur on his personality in spite of his insightful and lofty philosophical postulations. Jean Paul Sartre also eulogized violence when he wrote in the preface to Frantz Fanon's book, The Wretched of the Earth (1982), that "this irrepressible violence… is man recreating himself".
Sartre supported many liberation struggles, like the Algerian war of independence from France, as ontological necessities that would bring about the full realization of the self in freedom. For him, violence is liberating and remains the only alternative left for the oppressed to seize the historical initiative and exist as authentic beings. Amilcar Cabral (1967), the revolutionary who masterminded the independence of Guinea-Bissau from Portugal, informed by the dialectical interpretation of the historical process as becoming, saw armed conflict as necessary for advancement. For him, "we are not defending the armed fight… it is a violence against even our own people. But it is not our invention… it is not our cool decision; it is the requirement of history" (79). For Cabral, therefore, historical reality itself requires violence as part of the dialectical process in order to arrive at the required synthesis of peace and freedom. Cabral held that violence is the only choice left for the colonized to regain their existential space and live as real beings. One can fully appreciate the import of his position when one considers the dehumanizing state and treatment of the Africans and others who lived through the indignity of colonization. In such a scenario, confrontation to negate the status quo was inevitable, since force has no moral sanction; whatever is taken by force can, by that same means (force), be legitimately regained (Rousseau, 1983). Counter-violence is antithetical to the original violence and will resolve into a synthesis of peace and freedom. According to Frantz Fanon (1982), in The Wretched of the Earth, violence is the necessary condition for liberation from oppression. Indeed, it is "violence alone, violence committed by the people, violence organized and educated by its leaders" that makes it possible for the masses to understand social truths. In the Nigerian context, Oronto Douglas and Doife Ola (2003) view the ethnic militias in a similar light, as "civil society organizations that fight in concert with other progressive forces for liberation of all the oppressed people of the land". For them, the ethnic militancy is "a contribution to democracy and diversity". Implication for Violence and Peace in Nigeria In reality, conflict arises as a reaction to something. It could be injustice, oppression, neglect, corruption, underdevelopment, etc., though it could also be due to the very nature of reality itself, which is ontologically violence-prone. At any rate, violence in many instances, as in liberation struggles, is aimed at righting some perceived wrongs. This was observed by Jean Paul Sartre (1992) when he noted that "all violence presents itself as the recuperation of a right…and, reciprocally, every right inexorably contains within itself the embryo of violence" (Notebook of Ethics). Thus, conflicts arise when people perceive that they are deprived of what they may consider their rights; some of which may rightly be due them, while in some instances such perception may be far from reality. Even when such rights are granted, the inexorable law of dialectics will make such rights negate some other people's rights, whether real or misplaced, hence the continuation of the never-ending cycle of violence and peace. What the above illustrates is that conflict, though it may seem undesirable and odious, is nevertheless necessary to bring peace, justice, attention and redress in some instances. As noted earlier, Hegel's thought on violence would definitely provoke some salient and fundamental questions regarding the nature of violence.
Is violence intrinsically bad? If so, can something good be derived from it? In other words, can goodness be deducible from evil? If it is not intrinsically bad, can it be justified in any circumstance? Can a wrong means justify a good end? Must evil be necessarily present for peace to ensue? Given the dialectical process, will perpetual peace ever be attained? It appears that these issues are both ontological and ethical. Hegel, however, does not seem to have been bothered about the moral implication of his thought on violence but was concerned about the ontological possibility of deducing peace from violence based on his dialectics. According to Stace (1955), Hegel found that a concept may contain its own opposite hidden away within itself, and that this opposite may be extricated or deduced from it and made to do the work of a differentia, thus converting genus into species. Though such agitators may not be conscious of the import of their acts as being dialectically significant, they are nevertheless the unconscious tools of communal becoming and historicity. The emergence of Boko Haram and allied terrorist groups in the north has opened the eyes of government to the dangers of having an army of illiterate and jobless youths roaming the streets, especially in the North, where Islamic fundamentalists are on the prowl to recruit them for their blood-letting causes. Even the politicians who maintained the status quo and used the almajiris as cannon fodder to rig elections and cause violence to aid their causes have now come to realize that they are creating some Frankensteins that may end up annihilating them. This informed the decision of the administration of President Goodluck Jonathan to embark on establishing many almajiri schools in the North to forestall the recruitment of illiterate youths into terrorist ranks. Without the reality of Boko Haram, such schools would not have come to be, though it is the sensible thing that should have been done ab initio. It appears that Nigeria, particularly its government, responds and lives up to its responsibilities only when reminded through agitations, and most especially violent ones. The leaders tend to do the right things only when cornered by crises. To this effect, some scholars have suggested that conflicts should be deliberately created in order to bring about progress in our country. In the words of Adedotun Philips (1981): If crises are deliberately created in particular problem areas which are carefully selected in order to maximize the short-term social inconvenience and ensure that such inconvenience affects all social groups (high and low), then government can reasonably be expected to act as swiftly as it has done in the past (in emergency situations) to improve the efficiency and effectiveness of the services concerned. In essence, it is a strategy which attempts to temporarily bring down one sector in order to bring up and improve another sector. Though the above suggestion by Adedotun may seem whimsical, it nevertheless captures the important role conflicts and crises play in our national evolution. Without violence, one could be taken for granted, exploited, neglected, marginalized and trampled upon by those in power. One needs to accentuate one's being by drawing attention to oneself through the language the government here understands very well: violence, since it is also sustained by it.
As an Ibibio proverb would have it, "ifod isitaha ayin eka asong mbang", which means that "a witch can never bewitch or kill the son of a mother who can make noise". Without conflicts, the much desired peace and its appurtenances will remain a mirage. What this boils down to is that violence, by its very nature in a dialectical process, is not intrinsically evil, as would be suggested by some moralists. As a necessary dialectic of peace, it is an ontological phenomenon that makes peace meaningful and appreciated. As was argued before, there are contexts which would make violence necessary and adjudged moral. The Just War Theory (Jus Bellum Iustum), for instance, which is even accepted by some moralists, cuts the ground from under anyone who views violence as intrinsically evil. Therefore, violence should not be dismissed in one fell swoop as being necessarily evil and intrinsically morally abhorrent. The context out of which violence emerges should not be detached from its evaluation. The context determines the moral import of violence, not its actuality in se, per se. What the above implies is that, while violence should in general be condemned and avoided, there are instances that make violence the inevitable option for seizing the initiative of history, for survival and self-worth, and for achieving peace and other desirable outcomes. Violence for violence's sake can never be justified and is not what is intended here, whatsoever. Also, the violence hinted at here is not limited to or exhausted by armed conflict, as earlier explained: it also stands for rebellion, agitation, disagreements, demonstrations, strikes, etc., which all seek to negate the status quo. Conclusion The ontological implication of dialectics inevitably results in an eternal cyclic order, implying that the achieved peace (synthesis) eventually evolves into another thesis (in this case, violence); the question, then, is whether the desired perpetual peace will ever be attained. Can anything good come out of violence? Hegel was not unaware of this implication, as he proposed a situation where love will, at a certain point in the dialectical process, reconcile all conflicts. For Hegel, love is the reconciliation of opposites; the conflict situation is reconciled eternally in love. Love in his words is: ... a distinguishing of the two, who nevertheless are absolutely not distinguished for each other. The consciousness or feeling of the identity of the two, to be outside of myself and in the other, this is love. I have my self-consciousness not in myself but in the other. I am satisfied and have peace with myself only in this other, and I AM only because I have peace with myself; if I did not have it then I would be a contradiction that falls to pieces. This other, because it likewise exists outside itself, has its self-consciousness only in me; and both the other and I are only this consciousness of being-outside-ourselves and of our identity; we are only this intuition, feeling, and knowledge of our unity. This is love, and without knowing that love is both a distinguishing and the sublation of this distinction, one speaks emptily of it (Leifheit (2012), https://www.marxists.org/reference/archive/hegel/works/love/). Commenting on this submission by Hegel, Peter J. Leithart (Online, 2003) writes: Among the many fascinating things here is the implication that love is the prerequisite for a unified identity.
Hegel says that to be a unified self, one must be at peace; but this peace comes only through the "distinguishing and sublation of distinction" that is love for another person; MY peace, my unity as a being, depends on love, the other's love for me and my love for another. This is suggestive, though Hegel doesn't exactly explain WHY this peace comes only "in the other." Perhaps it has something to do with his insight that part of my identity is my difference from the other; to say I am Peter is, at least, to say I am not Paul or George. This means that my identity and unified self-conception includes a moment of difference. But how does this difference not turn into endless "deference"? Through mutual (almost perichoretic) love. It is only within this ambience of mutual love and concern that the actions of individuals, cooperating bodies and government, particularly in Nigeria, will not deliberately generate conflict by institutionalized neglect and injustice. After all, caring for one another through communal concern is an ontological reality in Africa. Going beyond oneself to the consideration of others will result in what this paper regards as affective humanism. So if affective humanism then becomes a "meta-motivation" (using Maslow's phrase) of action, then conflict will be totally eliminated. Affective humanism as a philosophy of action is premised on the African ontology of communalism and harmonious monism. African ontology hinges on a dualistic universe that is integrative and complementary. Both worlds are peopled with existents, visible and invisible respectively; hierarchically placed but complementary in existentiality. Every existent then lives in complementarity, harmoniously and integratively. Within this placement, every man needs another to live meaningfully and authentically. Since the world is the theatre of action for the realization of the true self, the other persons and things are equally important. Affective humanism does not consist in loving others as one would love oneself but involves what both Maslow and Frankl call "auto-transcendence" or "self-transcendence". Self-transcendence denotes: The fact that being human always points, and is directed, to something or someone other than oneself, be it a meaning to fulfill or another to encounter. The more one forgets himself, by giving himself to a cause to serve or another person to love, the more human he is and the more he actualizes himself (Frankl, Quotes, Online). As noted in my article, "Metaphysics of Terrorism" (Etim, 2018), no one can love except he transcends himself. This informs Frankl's description of love as: .... the only way to grasp another human being in the innermost core of his personality. No one can become fully aware of the very essence of another human being unless he loves him. By his love, he is enabled to see the essential traits and features in the beloved person; and even more, he sees that which is potential in him, which is not yet actualized but yet ought to be actualized (Quotes, Online). That is why Hegel and Frankl see love as the ultimate goal to which man can aspire, and hold that the salvation of man is through love. It is within this consideration of love that violence as a precursor of peace will become meaningful, and not in perpetual violence and conflict.
10,197.6
2018-09-17T00:00:00.000
[ "Philosophy" ]
Stability and dynamics of optically levitated dielectric disks in a Gaussian standing wave beyond the harmonic approximation Forces and torques exerted on dielectric disks trapped in a Gaussian standing wave are analyzed theoretically for disks of radius $2~\mu\text{m}$ with index of refraction $n=1.45$ and $n=2.0$ as well as disks of radius 200 nm with $n=1.45$. Calculations of the forces and torques were conducted both analytically and numerically using a discrete-dipole approximation method. Besides harmonic terms, third order ro-translational coupling terms in the potential energy can be significant and a necessary consideration when describing the dynamics of disks outside of the Rayleigh limit. The coupling terms are a result of the finite extension of the disk coupling to both the Gaussian and standing wave geometry of the beam. The resulting dynamics of the degrees of freedom most affected by the coupling terms exhibit several sidebands, as evidenced in the power spectral densities. Simulations show that for Gaussian beam waists of $1-4~\mu\text{m}$ the disk remains stably trapped. I. INTRODUCTION The choice of particle used in levitated optomechanics is an important factor that depends on the intended application. The most widely used particle in the field is a silica sphere with radius small compared to the wavelength. The dynamics of spheres trapped in cavities and focused laser beams are well understood and used for cooling to the motional ground state as well as force sensing [1][2][3][4]. This is owing to the simple harmonic translational and free rotational dynamics, making it an ideal system to handle for both experimentalists and theorists. Particles with reduced symmetry allow rotational degrees of freedom to enter into the potential energy. A nanorod has large differences in moments of inertia and polarizability, which allows rotations to be described as decoupled librations about the laser polarization axis. The motion of nanodumbbells, or generally anisotropic materials, requires rigid-body dynamics since these particles have moments of inertia of similar magnitude [5,6]. Increasing the size of the particle relative to the wavelength of the laser further complicates the motion for any particle shape [7]. Still, terms necessary to describe nanorods and nanodumbbells have been investigated and the motion is also well understood [8][9][10][11]. Dielectric disks also have a relatively simple shape, but have not seen as much attention as other particle geometries.
Several studies point to thin nanodisk scattering being more realistically described in a Rayleigh-Gans rather than a Rayleigh approximation for index of refraction $n \sim 1$ [12][13][14]. This generally leads to an orientation-dependent shape function in the form of a Bessel function. From studies investigating the applications of disks for various purposes, it is unclear whether there is consensus on the necessity of including the shape function or other non-harmonic terms in the dynamics [15][16][17]. There are few experimental studies involving disks; however, two such studies suggest terms of higher order may be necessary for describing the motion [18,19]. In this paper it is shown that higher order terms, of at least third order in the potential energy, are necessary for describing the dynamics of disks outside the Rayleigh regime in a Gaussian standing wave. A disk experiences restoring forces in all three translational degrees of freedom and torques in two rotational degrees of freedom. Similar to rods and nanodumbbells, the rotation about the disk's symmetry axis is unaffected by light coupling and is a constant of the motion. Focus is given to the effects due to the third order terms, which provide unique ro-translational couplings that have not yet been discussed in levitated optomechanics. The coupling terms are a result of the finite extension of the disk coupling to both the Gaussian and standing wave geometry of the beam. Inclusion of the coupling terms results in dynamics with several different modes of oscillation for each degree of freedom, which are evident in the power spectral density. Simulations show no evidence of instability. An analytical as well as numerical approach using a discrete-dipole approximation method is used to identify the forces and torques on disks of radius $2~\mu\text{m}$ with index of refraction $n = 1.45$ and $n = 2.0$ as well as disks of radius 200 nm with $n = 1.45$. The Gaussian standing wave is constructed with a wavelength $\lambda = 850$ nm and various waists $w_0 = 2, 2.5, 3, 4~\mu\text{m}$. The coupling terms presented in this paper may hinder or benefit applications for levitated disks. Disks have been proposed as potential accelerometers for gravitational wave detection [16]. The third order coupling terms may complicate determining which degree of freedom experienced a force or torque. On the other hand, they may be used as a means for indirectly detecting the motion of several degrees of freedom with a single detection scheme, and therefore as an efficient force/torque detector. Another common application is cooling the motion of the disk in an attempt to study macroscopic quantum mechanics [20][21][22]. As energy from one degree of freedom can be transferred to another through the couplings, there may be potential for sympathetically cooling several degrees of freedom by performing a cooling method on only one of them. Preliminary results show that this is indeed possible for both radii studied, using parametric feedback cooling or cold damping. The coupling terms are found to scale as the squared ratio of the radius to the beam waist, $a^2/w_0^2$, and may therefore have less of an impact on the dynamics for particles of smaller radii compared to the wavelength. It is also found that the influence of the coupling may be reduced by sufficiently separating each degree of freedom's harmonic frequency. This paper is organized as follows. Section II illustrates the analytical calculation of the potential energy of thin dielectric disks in a Gaussian standing wave.
The potential energy is approximated to reveal a term third order in displacements and rotations. In Sec. III, the procedure for calculating forces and torques on a disk using the discrete-dipole approximation is outlined. The corresponding coefficients/frequencies are presented for various Gaussian beam waists. Lastly, Sec. IV examines the resulting dynamics due to the harmonic and coupling terms described in the previous sections. FIG. 1. Coordinate system in the lab frame $(x, y, z)$ and the particle frame $(x', y', z')$. The disk's symmetry axis is aligned with the particle frame $z'$ axis. The disk's center of mass as measured from the lab frame, $\mathbf{r}_0$, is shown in red. The location of a point on the thin disk is given by the polar coordinates $(\rho', \phi')$ in the particle frame, which are shown in purple. II. APPROXIMATE ANALYTICAL POTENTIAL ENERGY This section outlines the analytical calculation of the potential energy of a thin dielectric disk in a Gaussian standing wave in the Rayleigh-Gans approximation. In this approach the disk thickness is taken to be very thin so that the Rayleigh approximation holds along that direction [14,15]. The approximated results verify the existence of, and help elucidate the origin of, the terms responsible for the dynamics seen in the following sections. The disk is described with radius $a$, thickness $T \ll \lambda$, index of refraction $n$, and susceptibilities $\chi_\parallel = n^2 - 1$ and $\chi_\perp = \chi_\parallel/n^2$ corresponding to the susceptibility parallel and perpendicular to the disk symmetry axis ($z'$ axis), respectively [15]. The principal moments of inertia are $I_z = ma^2/2$ and $I_x = I_y = m(3a^2 + T^2)/12$. The disk's center of mass is located at $\mathbf{r}_0 = \langle x_0, y_0, z_0 \rangle$ and rotations are described in terms of the Euler angles $(\alpha, \beta, \gamma)$ in the $z$-$y$-$z$ convention [5,23,24]. The Gaussian standing wave is formed by two counterpropagating Gaussian waves with non-zero longitudinal components so that they satisfy Maxwell's equations [25]. Each traveling wave has the symmetric waist $w_0$, wavevector $\mathbf{k} = (\pm\hat{x})\, 2\pi/\lambda$, and is polarized in the $\hat{z}$ direction. Around the focus, $x = 0$, each wave takes the form of Eq. (1), where $x_R = k w_0^2/2$ is the Rayleigh range and $+$ ($-$) stands for the right (left) traveling wave. The incident fields used for the numerical calculations in Secs. III and IV are found by propagating Eq. (1) throughout all space using the angular spectrum representation [25]. For the analytical calculations performed in this section and in Appendix B, the approximated Gaussian standing wave is used, which is valid in the space $|x| \ll x_R$. The mechanical potential energy associated with the interaction between the light and the dielectric is given by Eq. (3), where the integral is over the volume of the disk, $\chi_0$ is the diagonal susceptibility matrix in the nanoparticle frame, and $\overset{\leftrightarrow}{R}$ is the rotation matrix. The rotation matrix in terms of the Euler angles can be found explicitly in Appendix A. The potential energy in the Rayleigh-Gans approximation with the incident field, Eq. (2), becomes Eq. (4), where $\chi_1 = \Delta\chi \cos^2\beta + \chi_\perp$, $\chi_2 = \Delta\chi \sin\beta \cos\beta \sin\alpha$, $\Delta\chi = \chi_\parallel - \chi_\perp$, and a higher order term proportional to $z^2(\mathbf{r}')\sin^2 kx(\mathbf{r}')$ was dropped. To evaluate Eq. (4) the coordinates of the disk must be projected onto each lab frame coordinate $(x, y, z)$, and it is favorable to move to polar coordinates. First, in the limit $T \ll \lambda$ the functions in Eq. (4) are independent of the thickness, leaving the functions in the integral dependent only on the disk's radial and angular coordinates $(\rho', \phi')$ (see Fig. 1).
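The explicit rotation matrix is deferred to Appendix A in the text; as a minimal sketch of how the $z$-$y$-$z$ Euler-angle rotation and the lab-frame susceptibility tensor of Eq. (3) can be constructed, assuming the common ordering $R = R_z(\alpha) R_y(\beta) R_z(\gamma)$ (conventions differ, so the ordering here is an assumption, not the paper's Appendix A):

```python
import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def euler_zyz(alpha, beta, gamma):
    """Rotation matrix in the z-y-z convention: R = Rz(alpha) Ry(beta) Rz(gamma)."""
    return rot_z(alpha) @ rot_y(beta) @ rot_z(gamma)

def lab_frame_chi(chi_par, chi_perp, alpha, beta, gamma=0.0):
    """Lab-frame susceptibility tensor R chi0 R^T from the diagonal
    particle-frame tensor chi0 = diag(chi_perp, chi_perp, chi_par),
    with chi_par along the disk symmetry (z') axis."""
    chi0 = np.diag([chi_perp, chi_perp, chi_par])
    R = euler_zyz(alpha, beta, gamma)
    return R @ chi0 @ R.T
```

Note that the disk's symmetry under rotation about $z'$ makes the result independent of $\gamma$, consistent with the statement below that the potential energy does not depend on $\gamma$.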
In terms of the center of mass and disk coordinates, $x(\mathbf{r}')$, $y(\mathbf{r}')$, and $z(\mathbf{r}')$ in Eq. (4) are given by Eqs. (5) and (6), where the $R_{ij}$ are matrix components of the rotation matrix $\overset{\leftrightarrow}{R}$ (see Appendix A). Insertion of Eqs. (5) and (6) into Eq. (4) leads to analytic solutions in terms of Bessel functions. In the limit of small radius, $w_0 \gg a$, $r_0 a \ll w_0^2$, where the zeroth order approximation to the exponentials ($\sim 1$) can be used, a Bessel function of the first kind is obtained, as was found in Ref. [15]. However, this approximation misses the coupling of the disk to the Gaussian standing wave, and a fourth order expansion in the coordinates is required to resolve it. Practical parameters in levitated optomechanics are in the range $(\lambda, w_0) \sim 1~\mu\text{m}$ and $(r_0, a) \sim 0.1 - 1~\mu\text{m}$. For the derivation, the limits $a^2 \ll w_0^2$, $r_0^2 \ll w_0^2$ are used to expand each function in Eq. (4) to fourth order in the coordinates, and terms $O(a^6/w_0^6)$ as well as $O(x_{0,i}^n \pi_j^m)$, where $n + m \geq 4$, $\pi_j = (\alpha, \beta)$, are dropped, which retains terms up to third order in the coordinates. Due to the symmetry of the disk, the potential energy is independent of the angle $\gamma$. Further, the disk's symmetry axis is primarily aligned along the lab frame $\hat{x}$ direction and, as will be justified in the next section, rotates at angles that justify the small angle approximation $\alpha \to 0 + \theta_z$, $\beta \to \pi/2 + \theta_y$ with $\theta_z$, $\theta_y$ small. Here $\theta_z$ represents small angle rotations about the lab frame $z$ axis while $\theta_y$ is a small rotation about the lab frame $y$ axis. The resulting potential energy is of the form of Eq. (7); explicit expressions for the $\omega_i$ may be found in Appendix B. The terms in the first row of the potential describe simple harmonic motion for the three translational and two rotational degrees of freedom. The last term is a coupling between the translational and rotational degrees of freedom that is of third order in the coordinates. The coupling terms arise due to the finite radius of the disk and the Gaussian and standing wave geometry of the beam. An asymmetric electric field gradient across the disk produces a stronger force on the section of the disk with greater laser intensity. That section of the disk is pulled into the region of the trap with greater laser intensity more strongly than the section of the disk with less field intensity. As the radius increases and the trap becomes more confining, the greater the electric field gradient across the disk and the more influential the coupling terms are. With reference to Eq. (4), the coupling is a result of the ro-translational coupling in the Gaussian factor together with the $x_0$ dependence in $\cos^2 kx(\mathbf{r}')$ describing the standing wave. The asymmetry in the $(\omega_1, \omega_2)$ coefficients is due to the $\hat{x}$ component of the incident electric field. To garner an idea of the dynamics that arise due to the coupling, consider the $x_0 y_0 \theta_z$ term in Eq. (7). A disk displaced by $\mathbf{r}_0 = \langle x_0, y_0, 0 \rangle$ in Fig. 1 experiences a torque about the $-z$ axis due to a greater electric field intensity on the side of the disk nearest the focus. These terms are therefore a gradient force/torque arising as a consequence of the electric field gradient along the finite extension of the disk. III. NUMERICAL EVALUATION OF THE FORCES AND TORQUES A. System and Procedure The optical scattering problem for finite sized dielectric objects is generally difficult to solve analytically. As was done in the previous section, approximations are often required to glean insight into the dynamics.
Another rigorous approach is to numerically solve for the scattered electromagnetic waves and use the resulting Maxwell stress tensor to obtain the forces and torques. This section details the results from performing the latter method by numerically implementing the discrete-dipole approximation (DDA) to calculate the scattered fields of the disk [26,27]. In the DDA, the disk is composed of $N$ discrete spherical dipoles, each with polarizability $\alpha$, and the internal fields of the dielectric are solved for self-consistently to retrieve the scattered fields outside the particle. In the implementation of the DDA used for this paper, each dipole that composed the disk had a polarizability $\alpha = 4\pi\epsilon_0 R^3 \frac{n^2 - 1}{n^2 + 2}$. The method developed has been shown to be accurate to within 1% by comparing the scattered fields from a discretized sphere to the exact Mie scattering solutions [28]. The scattered fields that are generated from the DDA are then added to the incident field and inserted into the Maxwell stress tensor [29], $\overset{\leftrightarrow}{T}$, in order to obtain the forces and torques via the surface integrals of Eqs. (9) and (10). The surface over which the integration is performed was taken to be a sphere centered at the disk center with radius 1.5 times that of the disk. The surface integration was performed using Gaussian quadrature with an increasing number of points until convergence was demonstrated. The above procedure was performed for dielectric disks located near the intensity maximum of a Gaussian standing wave. To construct the standing wave, a right-traveling wave, $\mathbf{E}_R(x, y, z)$, is found by propagating Eq. (1) throughout all space using the angular spectrum representation with no paraxial approximation. A left-traveling wave, $\mathbf{E}_L(x, y, z) = \mathbf{E}_R(-x, -y, z)$, is added to the right-traveling wave to form the standing wave. The wavelength of each wave is $\lambda = 850$ nm and is fixed throughout this paper. A range of Gaussian beam waists, $w_0 = 2, 2.5, 3, 4~\mu\text{m}$, was explored to define the optical trap. Most of the calculations performed were for disks of radius $a = 2~\mu\text{m}$, thickness $T = \lambda/4n$ to achieve maximum light coupling, and index of refraction $n = 1.45$ or $n = 2.0$. The indices of refraction correspond to materials composed of silica and silicon nitride, respectively. Unless otherwise stated, the data and discussions that follow will refer to this set of parameters. The following example outlines the steps for how a calculation is performed: the disk's symmetry axis is aligned with the axial direction ($x$ axis), the disk is displaced a distance $y_0$ from the focus of the standing wave, the scattered waves are calculated using the DDA, and the forces and torques are computed using Eqs. (9) and (10). The process is identical for rotations: the disk is initially situated at $\mathbf{r}_0 = \langle 0, 0, 0 \rangle$ and $(\alpha = 0, \beta = \pi/2)$, a rotation is made $\alpha = 0 + \theta_z$, the scattered waves are calculated using the DDA, and the forces and torques are calculated. The baseline for the calculations is when the disk is placed symmetrically at the focus of the standing wave, $\mathbf{r}_0 = \langle 0, 0, 0 \rangle$, $(\alpha = 0, \beta = \pi/2)$, which should be a potential minimum. Indeed, a force or torque due to a displacement generally gives a value at least ten orders of magnitude greater than the baseline. B. Forces and Torques As is expected in levitated optomechanics, a small displacement in one direction reveals a spring force in that same direction, $F_i = -k_i x_{i,0}$, and torque $\tau_i = -\kappa_i \pi_i$, $\pi_i = (\alpha, \beta)$. The spring constants for each degree of freedom, $(k_i, \kappa_i)$, are determined by direct division, $k_i = -F_i/x_{i,0}$.
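The per-dipole polarizability above is the Clausius-Mossotti form; as a minimal sketch of Eq. (8), with the inter-dipole spacing and effective dipole radius chosen purely for illustration (the paper does not specify them in the excerpt above):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dipole_polarizability(R, n):
    """Clausius-Mossotti polarizability of a small sphere of radius R
    and refractive index n, as in Eq. (8)."""
    return 4 * np.pi * EPS0 * R**3 * (n**2 - 1) / (n**2 + 2)

# Example: silica (n = 1.45) dipoles on a lattice fine enough to
# resolve a T = lambda/(4n) disk; the spacing below is an assumption.
spacing = 20e-9            # assumed inter-dipole spacing, m
R = spacing / 2            # assumed effective dipole radius
alpha = dipole_polarizability(R, 1.45)
print(f"per-dipole polarizability: {alpha:.3e} F m^2")
```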
At the harmonic level, no coupling of the different degrees of freedom through the potential energy was found. Given that there are 6 degrees of freedom (including $\gamma$), there are 15 different second order couplings possible in the forces and torques. Of these possibilities, only terms similar to that in Eq. (7) were found to be above the baseline. These terms were found to be significant for disks of large and small radii. For example, a displacement of the center of mass by $\mathbf{r}_0 = \langle x_0, 0, z_0 \rangle$ produces a torque about the $y$ axis, suggesting a term in the potential energy $U \propto D_1 x_0 z_0 \theta_y$, with $D_1$ a proportionality constant. A similar coupling of the same order was found, $U \propto D_2 x_0 y_0 \theta_z$, with $D_2 \neq D_1$ necessarily. The coefficients $D_1$ and $D_2$ are also determined by division, e.g. $D_1 = -F_z/(x_0 \theta_y)$. Interestingly, the coefficients computed in this way generally give different values depending on whether they are extracted from the forces or from the torques, with $A \approx C_1$ for the first coupling term and $B \approx C_2$ for the second. The coefficients $A$ and $B$ can differ from $C_1$ and $C_2$ by 2% using a waist of $w_0 = 2~\mu\text{m}$ and 20% using a waist of $w_0 = 4~\mu\text{m}$. Although the discrepancy is suspected to be due to higher order terms, we are only interested in the dynamics due to this term, and the average values $D_1 = (A + 2C_1)/3$ and $D_2 = (B + 2C_2)/3$ will be used from here on, so that the potential energy can be written in the form of Eq. (7). The consequences of using the average values are insignificant and will be discussed in Sec. IV. The spring and coupling constants $(k_i, \kappa_i, D_i)$ have the same units and are most usefully written in terms of frequencies: one set for translational harmonic motion, one for rotational harmonic motion, and one for the coupling terms.

w0 (µm) | ωx (kHz) | ωy (kHz) | ωz (kHz) | ωθy (kHz) | ωθz (kHz) | ω1 (kHz) | ω2 (kHz)
2 | 394 | 38 | 38 | 537 | 390 | 46 | 39
3 | 264 | 17 | 17 | 361 | 263 | 21 | 17

TABLE I. Frequencies for silica disks of radius $a = 200$ nm and thickness $T = \lambda/(40n)$ for two beam waists, $w_0 = 2, 3~\mu\text{m}$. The disk has dimensions that are reduced by a factor of ten from the $a = 2~\mu\text{m}$, $T = \lambda/(4n)$ disks. A fixed total power of 100 mW is used for the calculations. The number of points used to compose the disk was $N = 37488$ and the thickness of the disk was 4 points. Values for the frequencies as a function of beam waist are shown in Fig. 2 for silica and Fig. 3 for silicon nitride, using a fixed total laser power of 100 mW. The general trend identified from the figures is that each frequency decreases as the waist increases. This feature is not unexpected; however, for particles in the Rayleigh regime $\lambda \gg a$, $\omega_i \propto 1/w_0^2$, while for $a = 2~\mu\text{m}$ disks the dependence is nearly linear. For both materials, the frequency in the axial direction is in the 150-200 kHz range while the radial degrees of freedom oscillate in the 1-10 kHz range. The axial frequency is most strongly affected by the standing wave, which is independent of the waist. However, the radial frequencies are dominantly due to the Gaussian geometry. To leading order (see Appendix B), for fixed power the axial frequency depends inversely on the wavelength and waist, $\omega_x \propto 1/\lambda w_0$, while the radial frequencies depend on the waist as $\omega_{y,z} \propto 1/w_0^2$, hence the disparity between the axial and radial frequencies. Note that part of the waist dependence of each frequency is due to the dependence of the laser intensity on the waist. Each frequency therefore shares a $1/w_0$ dependence from the power.
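The conversion from spring constants to the frequencies quoted above follows the usual harmonic relations; a minimal sketch, assuming $\omega = \sqrt{k/m}$ for translations and $\omega = \sqrt{\kappa/I_x}$ for librations (the paper's exact normalization of $\omega_{1,2}$ from $D_{1,2}$ is not reproduced in the excerpt, so only the harmonic conversions are shown, and the numerical inputs below are placeholders, not DDA output):

```python
import numpy as np

def disk_mass_and_inertia(rho, a, T):
    """Mass and transverse moment of inertia of a uniform disk;
    I_x = I_y = m (3 a^2 + T^2) / 12 as given in Sec. II."""
    m = rho * np.pi * a**2 * T
    I_perp = m * (3.0 * a**2 + T**2) / 12.0
    return m, I_perp

def harmonic_frequencies_khz(k_trans, kappa_rot, rho, a, T):
    """Translational/librational frequencies in kHz from spring
    constants k_trans (N/m) and kappa_rot (N m/rad)."""
    m, I_perp = disk_mass_and_inertia(rho, a, T)
    f_t = np.sqrt(k_trans / m) / (2.0 * np.pi) / 1e3
    f_r = np.sqrt(kappa_rot / I_perp) / (2.0 * np.pi) / 1e3
    return f_t, f_r

# Illustrative call for an a = 2 um silica disk (rho assumed 2200 kg/m^3):
print(harmonic_frequencies_khz(1e-5, 1e-19, 2200.0, 2e-6, 850e-9 / (4 * 1.45)))
```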
For a $2~\mu\text{m}$ radius disk at $T = 300$ K, these frequencies correspond to translational oscillation amplitudes of $x_0 \sim 1$ nm and $(y_0, z_0) \sim 20$ nm. The rotational frequencies are closer to the axial frequency, in the range 125-190 kHz. The rotational frequencies differ by 20% between the two materials at the same waist. Using the average frequency, this corresponds to angular displacements of $\sim 1$ mrad. Displacements of this size justify some of the approximations made in Sec. II, since $r_0 \ll w_0$ and $\sin\alpha \approx \theta_z$. Also shown in Figs. 2 and 3 are the coupling coefficients $(\omega_1, \omega_2)$. The coefficients, being in the 50-200 kHz range, are comparable to both the rotational and axial frequencies. Due to the large coupling frequencies combined with the relatively large oscillation amplitude in the radial degrees of freedom, the resulting forces/torques due to the coupling terms have an impact on the dynamics, as shown in Sec. IV. Force and torque calculations were also performed for silica disks of radius $a = 200$ nm and thickness $T = \lambda/(40n)$ for the two beam waists $w_0 = 2, 3~\mu\text{m}$. The dimensions are ten times smaller than those of the $a = 2~\mu\text{m}$, $T = \lambda/(4n)$ disk. The resulting frequencies are shown in Table I. From the table, each frequency scales as $\omega_i \sim 1/w_0$ except for the radial frequencies, $(\omega_y, \omega_z) \sim 1/w_0^2$. This dependence on the waist is consistent with the analytical frequencies given in Appendix B. Also from the table, each harmonic frequency is larger, and the coupling frequencies reduced, compared to its $a = 2~\mu\text{m}$, $T = \lambda/(4n)$ counterpart in Fig. 2. The dependence of each frequency on the radius is also consistent with that found analytically in Appendix B. The harmonic frequencies increase as the radius decreases since the disk has greater field intensity per volume. The coupling frequencies scale as $\sim a/w_0^2$ due to the electric field gradient across the disk. This dependence provides a factor of ten between the $a = 200$ nm and $a = 2~\mu\text{m}$ coupling frequencies. C. Accuracy of the DDA FIG. 4. Frequencies obtained for an $a = 2~\mu\text{m}$ silica disk using the DDA, for a varying number of points composing the disk, relative to the frequency obtained using 299744 points, $\omega_{i,0}$. The legend describes the various frequencies for the $x, y, z, \theta_y, \theta_z$ degrees of freedom as well as the $\omega_1, \omega_2$ coupling frequencies. The data points along the x-axis are 4680, 15804, 37488, and 299744 points. Comparing the left and rightmost data points in the figure shows that using 64 times more points changes the frequencies by less than 2%. The frequencies shown in Sec. III B were obtained through several numerical operations, such as integrations and the implementation of the DDA. One of the major questions regarding convergence of these values is how many points (i.e. the number of discrete dipoles), $N$, should be used to discretize the disk. Figure 4 shows the relative change of the various frequencies discussed in the previous subsections as a function of the number of points used to compose the disk. Here, $\omega_{i,0}$ is the frequency calculated using the largest number of points shown in the plot, $N = 299744$. The frequency calculated using $N$ points is $\omega_i$. The change in the frequency $\omega_i$ compared to $\omega_{i,0}$ is then $\Delta\omega_i = \omega_i - \omega_{i,0}$. The plot is shown for all of the various frequencies discussed above, using an $a = 2~\mu\text{m}$ silica disk with a $w_0 = 2~\mu\text{m}$ waist. Increasing the number of points by a factor of 64, from $N = 4680$ to $N = 299744$, changes the frequencies by less than 2%.
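The quoted room-temperature amplitudes follow from equipartition, $\frac{1}{2} m \omega^2 \langle x^2 \rangle = \frac{1}{2} k_B T$; a quick sketch reproducing the $\sim 1$ nm axial and $\sim 20$ nm radial scales, with the frequencies taken from the text and the silica density an assumed value:

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def thermal_rms_amplitude(mass, omega, temperature=300.0):
    """RMS amplitude of a harmonic degree of freedom at temperature T:
    x_rms = sqrt(kB T / (m omega^2))."""
    return np.sqrt(KB * temperature / (mass * omega**2))

# a = 2 um silica disk, T = lambda/(4n); rho assumed 2200 kg/m^3.
rho, a, t = 2200.0, 2e-6, 850e-9 / (4 * 1.45)
m = rho * np.pi * a**2 * t
for label, f_khz in [("axial (x)", 163.0), ("radial (y, z)", 10.0)]:
    x_rms = thermal_rms_amplitude(m, 2 * np.pi * f_khz * 1e3)
    print(f"{label}: {x_rms * 1e9:.1f} nm")
```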
On the other hand, the time complexity of the DDA method used to calculate the scattered light from the disk scales as $N \ln N$. IV. DYNAMICS The previous two sections have illustrated that disks levitated in Gaussian standing waves experience simple harmonic motion as well as non-harmonic forces and torques involving second order couplings. This section discusses the resulting dynamics due to these forces and torques as well as the natural torques that arise in rigid body dynamics. Thus far the focus has been on identifying terms in the potential energy. For translational motion the kinetic energy is trivial and leads directly to the equations of motion for small angle oscillations. As was shown in Ref. [5], for a symmetric top-like rigid body the rotational kinetic energy naturally involves coupling between the $\alpha$, $\dot{\alpha}$, $\beta$, and $\dot{\beta}$ degrees of freedom. Whether these terms are significant or not depends on the geometry. For $a = 200$ nm disks each non-linear coupling term is significant and must be considered. For $a = 2~\mu\text{m}$ disks, the term responsible for precession about the $x$ axis is the largest, but is still a factor of $10^{-4}$ smaller than the harmonic term and is therefore negligible. The equations of motion for $a = 2~\mu\text{m}$ disks are then written, for small angle oscillations, as Eqs. (20) to (24). FIG. 5. Example trajectories of the $x$ and two rotational degrees of freedom, as well as the power spectral density of the axial motion, for an $a = 2~\mu\text{m}$ silica disk in a $w_0 = 3~\mu\text{m}$ waist trap. The influence of the second order coupling term produces several amplitude modulations at different frequencies, but the disk remains stable. The frequencies of modulation in the $x$ degree of freedom can be seen in the power spectral density. Note that the rotational amplitudes remain in the $\sim$ mrad range, justifying the small angle approximation. Figure 5 shows sample trajectories of the $\theta_y$, $\theta_z$, and $x$ motions of an $a = 2~\mu\text{m}$ silica disk in a $w_0 = 3~\mu\text{m}$ waist Gaussian standing wave, obtained by simulating Eqs. (20) to (24) at $T = 300$ K. The influence of the second order coupling terms is seen to be significant for the three degrees of freedom, with each trajectory containing modulations at various frequencies. Without the couplings the oscillations would be at the same amplitude for all times. In a gaseous environment these modulations might be mistaken for noise in an experiment. The bottom-rightmost plot in Fig. 5 shows the power spectral density (PSD) of the $x$ motion. The harmonic frequency $\omega_x/2\pi = 163$ kHz is the largest and rightmost peak in the PSD. The other frequencies in the figure are the harmonic frequency plus the sums and differences of the various $y$, $z$, $\theta_y$, and $\theta_z$ frequencies. Whereas sidebands due to coupling typically appear symmetrically on each side of the harmonic frequency, the frequency structure seen in Fig. 5 is such that all significant modes have smaller frequency than the harmonic frequency. This is not a general feature of the coupling term; it depends on the degree of freedom that is being observed and the various levels of degeneracy. The influence of the coupling term on each degree of freedom has two factors: the size of the coefficients $\omega_1$ and $\omega_2$, and the level of degeneracy of the coupled degrees of freedom. First, the $\omega_1$ and $\omega_2$ coupling coefficients are relatively large, $\sim 100$ kHz. Second, strong coupling is achieved when the frequencies are nearly degenerate. Because $\omega_x$, $\omega_{\theta_y}$, and $\omega_{\theta_z}$ are close in frequency, the coupling term produces a larger effect on these degrees of freedom.
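As an illustration of how such trajectories and PSDs can be produced, here is a minimal sketch of coupled small-angle equations of the kind discussed in Sec. II. The frequencies, the assumed coupling form $U/m = \omega_1^2 x y \theta_z + \omega_2^2 x z \theta_y$, and the feedback gain are illustrative assumptions, not the paper's Eqs. (20) to (24); the cold-damping variant anticipates the cooling discussion that follows:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import welch

# Representative angular frequencies (rad/s); illustrative values only.
wx, wy, wz = 2*np.pi*163e3, 2*np.pi*10e3, 2*np.pi*10e3
wty, wtz = 2*np.pi*150e3, 2*np.pi*140e3
w1, w2 = 2*np.pi*100e3, 2*np.pi*100e3           # coupling coefficients
a_disk, T_disk = 2e-6, 850e-9 / (4 * 1.45)
m_over_I = 12.0 / (3*a_disk**2 + T_disk**2)      # m / I_x for the disk

def rhs(t, s):
    """State s = (x, y, z, ty, tz, vx, vy, vz, vty, vtz); forces and
    torques derive from the assumed third-order coupling above."""
    x, y, z, ty, tz, vx, vy, vz, vty, vtz = s
    ax = -wx**2*x - w1**2*y*tz - w2**2*z*ty
    ay = -wy**2*y - w1**2*x*tz
    az = -wz**2*z - w2**2*x*ty
    aty = -wty**2*ty - m_over_I*w2**2*x*z
    atz = -wtz**2*tz - m_over_I*w1**2*x*y
    return [vx, vy, vz, vty, vtz, ax, ay, az, aty, atz]

def rhs_cold_damped(t, s, gamma_fb=2*np.pi*1e3):
    """Same dynamics with a cold-damping force -gamma_fb*vx applied
    to the x degree of freedom only (gain is an assumed value)."""
    d = rhs(t, s)
    d[5] -= gamma_fb * s[5]
    return d

# Thermal-scale initial conditions (m and rad), then the PSD of x(t).
s0 = [1e-9, 2e-8, 2e-8, 1e-3, 1e-3, 0, 0, 0, 0, 0]
fs = 4e6
t_eval = np.arange(0, 5e-3, 1/fs)
sol = solve_ivp(rhs, (0, t_eval[-1]), s0, t_eval=t_eval, rtol=1e-8)
f, psd = welch(sol.y[0], fs=fs, nperseg=1 << 13)
print(f"dominant x PSD peak near {f[np.argmax(psd)]/1e3:.0f} kHz")
```

With the coupling terms switched off the PSD collapses to a single peak at $\omega_x/2\pi$; switching them on populates sum and difference sidebands, which is the qualitative behavior described for Fig. 5.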
Since the radial degrees of freedom oscillate ten times slower, the influence of the coupling term on them is significantly reduced, but not absent. The question of stability is one of the most important for applications using levitated nanodisks. Despite the seemingly complicated motion, simulations have shown no evidence that this motion is unstable. The disk remains stable in the trap after several thousand oscillations for all of the beam waists explored, $w_0 = 2, 2.5, 3, 4~\mu\text{m}$. The $a = 200$ nm disk was found to be stable at all frequencies, even with inclusion of the nonlinear coupling terms in the rotational kinetic energy [5]. Recall from Sec. III B the differing coefficients in Eqs. (11) to (16) as produced from the DDA calculations. Simulating the equations of motion with different coefficients attached to each degree of freedom's coupling term causes no issue for stability. A common application in levitated optomechanics is cooling the motion of the levitated particle in an attempt to reach the ground state, or to reach lower pressures [4]. The couplings in this paper offer a possibility of cooling one or more degrees of freedom sympathetically by actively cooling only one degree of freedom. The full dynamics of cooling using the couplings is beyond the scope of this paper, but we note some preliminary findings. Through simulations of the equations of motion, Eqs. (20) to (24), results show that sympathetic cooling is indeed possible. For both radii, parametric feedback or cold damping [30,31] is an effective method for cooling multiple degrees of freedom. By inserting artificial numbers for the frequencies in the simulation, two relations were found for optimal cooling. Frequencies tailored to within a few kHz of the relations $\omega_x = \omega_{\theta_y} \pm \omega_z$ and/or $\omega_x = \omega_{\theta_z} \pm \omega_y$ can achieve significant sympathetic cooling, to at least the mK regime. From Fig. 2, $a = 2~\mu\text{m}$ silica disks are naturally in this regime. From Appendix B, each frequency depends on several parameters and has the possibility of being tuned to achieve optimal cooling experimentally. V. CONCLUSION The forces and torques exerted on dielectric disks trapped in a Gaussian standing wave were analyzed for disks of radius $2~\mu\text{m}$ with index of refraction $n = 1.45$ and $n = 2.0$ as well as disks of radius 200 nm with $n = 1.45$. Calculations of the forces and torques were conducted both analytically and numerically using a discrete-dipole approximation method. Similar to nanodumbbells, a nanodisk experiences restoring forces in all three translational degrees of freedom, restoring torques in two rotational degrees of freedom, and has constant spin about the symmetry axis. Due to the finite geometry of the disk, third order ro-translational coupling terms in the potential energy are found to be a necessary consideration when describing the dynamics of disks. The coupling terms are the result of an electric field gradient across the disk and depend on the ratio of the radius to the beam waist and on the temperature. The ro-translational coupling produces several modes of oscillation in the coupled degrees of freedom, which are evident in the power spectral density. While the restoring forces are dominant, the coupling terms can become sizable through strong coupling, which manifests when the coupled degrees of freedom are nearly degenerate. Despite the couplings, simulations show no evidence that the motion is unstable, which is of utmost importance for applications such as gravitational wave detection, force sensing, and ground state cooling.
ACKNOWLEDGMENTS Supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. We would like to acknowledge Alejandro Grine, Darwin Serkland, Justin Schultz, Michael Wood, Peter Schwindt, and Tongcang Li for motivating the pursuit of this topic and for useful discussions.
8,058.4
2020-06-12T00:00:00.000
[ "Physics" ]
Parametric Study of Unsteady Flow and Heat Transfer of Compressible Helium–Xenon Binary Gas through a Porous Channel Subjected to a Magnetic Field A numerical analysis of unsteady fluid and heat transport of compressible Helium–Xenon binary gas through a rectangular porous channel subjected to a transverse magnetic field is herein presented. The binary gas mixture consists of Helium (He) and Xenon (Xe). In addition, the compressible gas properties are temperature-dependent. The set of governing equations is nondimensionalized via appropriate dimensionless parameters. The dimensionless equations involve a number of dimensionless groups employed for a detailed parametric study. Consequently, the set of equations is discretized using a compact finite difference scheme and solved using the third-order Runge–Kutta method. The model's computed results are compared with data from past literature, and very favorable agreement is achieved. The results show that the magnetic field, compressibility and variable fluid properties profoundly affect heat and fluid transport. Variations of density with temperature as well as pressure result in an asymmetric mass flow profile. Furthermore, the friction coefficient is greater for the upper wall than for the lower wall due to larger velocity gradients along the top wall. Introduction Magnetohydrodynamics, known as MHD, is the combined field of fluid dynamics and electromagnetic effects. Theoretically, when electrically conducting fluids (such as noble gases and plasmas) flow through a magnetic field, the motion of the charge carriers in the fluid across the magnetic field induces an electric current in the direction perpendicular to both the flow and the magnetic field. The electric current then interacts with the magnetic field, which results in the "Lorentz force" being exerted on the fluid particles. In addition to the induced force, magnetic induction causes resistive heat, termed Joule or Ohmic heating. Magnetic fields can be utilized for the purpose of controlling flows, which are then termed MHD flows. MHD applications include use as thrusters, pumps, accelerators and cross-field generators [1][2][3][4]. These MHD devices play a major role, for example, in enhancing the efficiency of jet engines [2]. In another example, plasma can be confined within the torus shape of a tokamak by magnetic force in order to control the generation of nuclear fusion power [5]. Magnetic fields applied to flows in porous domains cover a wide area of application, including geothermal energy, metallurgy and nuclear science [6,7]. In the solidification of alloys, a magnetic field is used to adjust the flow pattern in the porous mushy zone [8]. MHD flows of liquid metal in a capillary porous system (CPS) were investigated as a means to obtain better control of the heat load and surface erosion on the plasma facing components (PFCs) [5,9]. For enhancement of heat transfer, there have been a number of studies which investigated nanofluids subjected to magnetic fields [10][11][12][13][14]. Furthermore, magnetic fields have been applied to blood flows for clinical purposes. Pulsatile flows of blood, considered electrically conducting, through porous media were numerically investigated [15,16]. The thermal behavior of the flow of an electrically conducting fluid through a magnetic field over a stretching sheet embedded in a non-Darcian medium was numerically analyzed, accounting for radiation as well as heat generation and absorption [17].
The unsteady problem of laminar, fully developed flow and heat transfer of an electrically conducting, heat-generating or -absorbing fluid with variable properties through a porous channel in the presence of uniform magnetic and electric fields was studied parametrically [18]. Later, El-Amin took the combined effects of Ohmic (Joule) heating and viscous dissipation into consideration to investigate MHD forced convection over a nonisothermal horizontal cylinder in a fluid-saturated porous medium [19]. The effects of flow, medium permeability and fractional parameters were analyzed for the flow of an electrically conducting fractional fluid through a porous channel; the exact solution was derived using the Caputo–Fabrizio time-fractional derivative, and the momentum equation was solved using the joint Laplace and Fourier transform [20]. However, the study of compressibility effects on MHD porous flows has been very limited [21][22][23][24]. The influences of magnetic induction and rotation on the thermosolutal instability of a rotating flow in a porous medium were investigated [21]; the fluid was considered compressible, as its density varied with temperature, pressure and concentration, each of which was also a function of elevation. The thermal instability of a Rivlin–Ericksen rotating fluid with suspended particles flowing through a porous medium subjected to a magnetic effect was studied [22]; the fluid density varied with both temperature and pressure. In these works [21][22][23][24], the electrically conducting fluid was otherwise modeled with constant properties. An MHD compressible liquid flowing past a porous plate was examined [24]; it was found that the differential equation for the density becomes linear if gravitational effects are neglected. Recent work on compressible MHD flow through a porous medium has been conducted numerically [25]. In that work, the computed results were nondimensionalized via post-processing. Although the overall effect of magnetic-flow interaction on thermal and flow processes was interesting, the Hartmann number, which represents the magnetic effect, was the only parameter examined. Moreover, employing the properties of air in the calculations might not reflect the true situation with accuracy. The present article investigates the unsteady flow and heat transfer of a compressible Helium–Xenon binary gaseous mixture subjected to a magnetic field in a two-dimensional plane. The Helium (He)–Xenon (Xe) gaseous mixture is commonly adopted as a working fluid in closed-cycle MHD power generation systems to avoid using an alkali-metal seed [26,27]. Additionally, the He–Xe mixture is utilized as a coolant in nuclear reactors due to its high heat transfer coefficient [28,29]. The thermal and mechanical properties of the gaseous mixture are considered temperature-dependent, and the density of the mixture is allowed to change with temperature as well as pressure. The governing equations are nondimensionalized, yielding a set of important dimensionless parameters that effectively facilitate parametric analysis. The numerical results are validated against previously published literature. To the best of our knowledge, such a study is not found in the existing literature. Figure 1 shows the physical geometry of the problem. A rectangular porous channel with the size of 0.0001 m × 0.0002 m is placed in an x-y coordinate system.
Problem Formulation and Methodology The binary gaseous mixture of Xe and He with equal component proportions is considered. The gas flows through an inlet section with the mass flow rate given by Equation (1), where u_0 is the peak velocity at the inflow boundary; u_0 is used as a reference value for the non-dimensionalization process. An isothermal condition is imposed on the lateral walls, with the top wall held at a higher temperature than the bottom wall. The outflow boundary is nonreflecting [30]. The magnetic field propagates from the lower wall to the upper wall across the domain. Mathematical Model The set of governing equations describes the non-isothermal flow of an electrically conducting fluid through a porous medium, comprising the mass conservation, momentum and energy equations. The Darcy–Brinkman–Forchheimer equation is used to model flow through the porous medium [31][32][33]. In conservative form, it is written for two-dimensional flow as

∂U/∂t + ∂E/∂x + ∂F/∂y = H,

where U, E, F and H are column vectors containing the flux variables. The stress tensor can be expanded in terms of velocity and viscosity. The dynamic viscosity and thermal conductivity of the binary gas mixture are taken from [34] and plotted as functions of temperature in Figure 2. The temperature dependencies of viscosity (µ) and thermal conductivity (k) are then modeled using a linear regression approach; for the viscosity,

µ(T) = 6.72696 × 10⁻⁸ T + 5.47652 × 10⁻⁶. (10)

In a fluid-saturated porous domain, the effective thermal conductivity k_eff combines the fluid and solid-matrix conductivities, where the porosity φ and the thermal conductivity of the solid matrix k_s are taken to be constant at 0.75 and 0.6, respectively.
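A minimal Python sketch of how the temperature-dependent properties above could be encoded. The µ(T) coefficients are the ones given in Equation (10); the k(T) coefficients and the exact form of k_eff are not recoverable from the extracted text, so the placeholder values and the porosity-weighted-average model below are assumptions for illustration only.

def mu(T):
    # Dynamic viscosity of the He-Xe mixture [Pa*s], linear fit from Eq. (10).
    return 6.72696e-8 * T + 5.47652e-6

def k_fluid(T, a=2.5e-4, b=0.05):
    # Thermal conductivity [W/(m*K)], linear in T; slope a and intercept b are
    # placeholders -- the paper's fitted coefficients did not survive extraction.
    return a * T + b

PHI = 0.75  # porosity (constant, as stated in the paper)
K_S = 0.6   # solid-matrix thermal conductivity (constant, as stated)

def k_eff(T):
    # Effective conductivity of the fluid-saturated porous medium via a simple
    # porosity-weighted average -- an assumed model, not the paper's formula.
    return PHI * k_fluid(T) + (1.0 - PHI) * K_S

print(mu(400.0), k_eff(400.0))  # e.g., evaluate the fits at 400 K

Linear fits like these keep property evaluation cheap inside the time-marching loop, which is presumably why the authors chose linear regression over a table lookup.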
Considering the compressibility effect, quantities including velocity, temperature, density and pressure must satisfy the total energy and ideal gas law relationships. Nondimensionalization Process The governing equations are rendered non-dimensional using adopted non-dimensional variables. The existing dimensional quantities (peak inlet velocity, channel width, and initial fluid temperature and density) are used as the reference values. Definitions of the non-dimensional variables are given in Table 1. The resulting non-dimensional set of equations involves dimensionless groups, among them the Reynolds number, the coefficient of permeability [36] and the Grashof number Gr_n, which are defined in Table 2; the dynamic viscosity, thermal conductivity, equation of state and total energy are likewise expressed in dimensionless form. Numerical Procedure and Model Validation The compact finite difference scheme is used for spatial discretization [37]. The solution is advanced in time using the third-order Runge–Kutta method. Each time step is calculated from the Courant–Friedrichs–Lewy (CFL) condition; the CFL value is held constant in the range 0.3–0.7, depending on numerical stability. A grid-independence test was carried out for the non-isothermal MHD flow. The 49 × 149 resolution was found to be optimal, since the changes in averaged velocity and temperature at the center of the domain were less than 0.5% when finer resolutions were implemented. The numerical model was validated against previously published work [25], whose focus was the coupled effects of variable properties and magnetic force on thermal and flow processes. The comparison of steady-state velocities at half of the channel for varied Hartmann numbers is shown in Figure 3. Excellent agreement is achieved, as the difference between the two results is minimal.
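The time-integration loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the specific third-order Runge–Kutta variant is not named in the paper, so the Shu–Osher SSP form is assumed, and rhs stands in for the compact-finite-difference evaluation of the spatial operator -∂E/∂x - ∂F/∂y + H.

import numpy as np

def cfl_dt(u, v, a, dx, dy, cfl=0.5):
    # Time step from the Courant-Friedrichs-Lewy condition for a compressible
    # solver: the fastest signal speed is |velocity| + sound speed a.
    # The paper keeps the CFL value constant in the 0.3-0.7 range.
    return cfl / (np.max(np.abs(u) + a) / dx + np.max(np.abs(v) + a) / dy)

def rk3_step(U, dt, rhs):
    # One explicit third-order Runge-Kutta step in Shu-Osher SSP form
    # (an assumption; the paper does not name its RK3 variant).
    U1 = U + dt * rhs(U)
    U2 = 0.75 * U + 0.25 * (U1 + dt * rhs(U1))
    return U / 3.0 + (2.0 / 3.0) * (U2 + dt * rhs(U2))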
Results and Discussion To investigate the transient effect, the numerical solutions were extracted at four different times for the Reynolds number Re_0 = 2300, Prandtl number Pr_0 = 0.44, and Mach number Ma_0 = 0.3. Figure 4 shows the mass flow rate changing with time for cases both without (N = 0) and with (N = 10) the magnetic effect. It is clearly seen that flow through a magnetic field propagates much more slowly than flow without a magnetic field. For the case where N = 10, fluid motion is retarded by the electromagnetic (Lorentz) force exerted on the charged particles. The thermal behavior of the two cases was investigated via the time evolution of the temperature contours, illustrated in Figure 5. Although temperature stratification mainly evolves downward from the upper wall, heat is transported primarily in the flow direction, from the left to the right of the domain. As evident in Figure 4, the flow is slowed down by the magnetic effect (N = 10), causing a thicker thermal boundary layer. Hereafter, the focus is on the case of non-isothermal flow subjected to magnetic force (N = 10). Figure 6 shows the change of the mass flow rate over time; the values along lines crossing at different transverse locations are plotted as well. With respect to the center line of the channel, the flow rate profile is clearly not symmetric: the non-uniform fluid density makes the mass flow rate greater in the bottom half of the domain than in the upper half. How the fluid density varies can be seen in Figure 7; the density is lower towards the top left corner of the domain, where the temperature is high. As briefly mentioned, temperature distributions throughout the channel are depicted in Figure 8, and temperatures spread more uniformly downstream.
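Mass flow rate curves like those in Figures 4 and 6 could be evaluated from the computed fields roughly as below; the array layout (streamwise stations along axis 0, transverse points along axis 1) is an assumption for illustration, not the authors' data structure.

import numpy as np

def mass_flow_profile(rho, u, y):
    # Streamwise mass flow rate per unit depth at every x-station:
    # m_dot(x) = integral over the channel height of rho*u dy,
    # evaluated here with the trapezoidal rule.
    return np.trapz(rho * u, y, axis=1)

Because rho varies with temperature and pressure, integrating rho*u (rather than u alone) is what produces the asymmetric profiles discussed above.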
With regard to the local wall shear stress relative to the dynamic pressure, Figure 9a presents the skin friction coefficient as a function of channel distance. The skin friction coefficient C_f increases significantly as x increases near the channel entrance, owing to the significant increase in the velocity gradient. The C_f value becomes larger for a greater Reynolds number, as the velocity gradient grows while the boundary layer becomes thinner. Further, the friction coefficient is greater for the upper wall than for the lower wall, indicating a larger velocity gradient at the top wall. This lack of axial symmetry is consistent with the results shown in the previous figures. In order to evaluate the heat transfer enhancement via convected thermal energy, the Nusselt number (Nu) is computed and plotted for different Re_0 in Figure 9b; Nu decreases downstream due to the thickening thermal boundary layer, but rises with an increased Re, since the thermal boundary layer is in turn thinner. Figure 10a,b illustrates the effects of Pr on C_f and Nu, respectively. The friction coefficient C_f increases with decreased Pr due to a higher ratio of momentum diffusivity to thermal diffusivity, which causes a higher velocity gradient along the walls; however, this effect on C_f can be considered small. On the other hand, Nu increases with Pr due to the greater inertial force of the fluid flow. The other important parameter considered is the Stuart number N, which represents the influence of the magnetic force relative to the inertial force. The respective effects of N on C_f and Nu are given in Figure 11a,b. The friction coefficient is found to vary with N near the channel entrance in the developing region. Interestingly, the trend reverses beginning at around one-fifth of the total channel length, as C_f decreases with an increased N. This means that the wall shear stress dominates near the flow entrance for high values of magnetic force, but becomes lower farther downstream. In Figure 11b, Nu reveals the same N-dependent trend as C_f: when N increases, Nu also increases in the entrance region, but it in turn decreases downstream towards the fully developed region. In this case, heat convection is enhanced by a stronger magnetic effect near the entrance, but is weakened substantially towards the channel exit.
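For reference, the two wall quantities discussed above can be computed from the near-wall gradients as in the following sketch, using the standard definitions C_f = tau_w/(0.5*rho_0*u_0^2) and Nu = h*H/k. The paper's exact reference scales are not spelled out in the extracted text, so treat these as illustrative rather than the authors' definitions.

def skin_friction(mu_wall, dudy_wall, rho0, u0):
    # Local skin friction coefficient: wall shear stress tau_w = mu*du/dy at
    # the wall, normalized by the reference dynamic pressure 0.5*rho0*u0**2.
    return mu_wall * dudy_wall / (0.5 * rho0 * u0**2)

def nusselt(dTdy_wall, T_wall, T_bulk, H):
    # Local Nusselt number based on channel height H. With the convective
    # coefficient h = -k*dT/dy|wall / (T_wall - T_bulk) and Nu = h*H/k,
    # the conductivity k cancels, leaving only the wall temperature gradient.
    return -dTdy_wall * H / (T_wall - T_bulk)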
Conclusions In this paper, the unsteady flow and heat transfer of a compressible helium-xenon binary gas through a porous channel subjected to a transverse magnetic field have been numerically investigated. The channel walls are assumed to be non-conducting and are maintained at two different temperatures. The electrically conducting binary gas has variable thermal conductivity and viscosity, which are functions of temperature. The effects of magnetic interaction, compressibility and variable fluid properties are examined through parametric dimensionless groups, namely the Reynolds number (Re_0), Prandtl number (Pr_0) and Stuart number (N). The results show that the magnetic field, compressibility and variable fluid properties considerably affect heat and fluid transport. Variations of density with temperature and pressure result in an asymmetric mass flow profile.
Furthermore, the friction coefficient is greater at the upper wall than at the lower wall due to larger velocity gradients along the top wall. The other findings of this study are as follows: • The friction coefficient C_f increases for a greater Reynolds number, as the velocity gradient gets larger while the boundary layer becomes thinner. The Nusselt number (Nu) rises with an increased Re_0, owing to a thinner thermal boundary layer. • The friction coefficient C_f increases with a decreased Prandtl number Pr_0, due to a higher ratio of momentum diffusivity to thermal diffusivity; however, this effect on C_f can be considered small. On the other hand, Nu increases with Pr_0, since the inertial force of the fluid flow is larger. • The friction coefficient C_f is found to vary with the Stuart number N near the channel entrance in the developing region. However, this trend reverses at a certain distance inside the channel, as C_f decreases with an increased N. As N increases, Nu also increases in the entrance region; however, it in turn decreases downstream towards the fully developed region.
5,552.4
2021-11-01T00:00:00.000
[ "Physics" ]
STEM technology-based model helps create an educational environment for developing students' technical and creative thinking For successful technology adaptation today, individuals need not so much acquired experience and knowledge as certain personality traits in the form of skills, competencies, and abilities for collaborative problem solving, as well as achievement motivation and self-development. The purpose of this study was to develop and test a model for the formation of personality traits associated with the development of technical and creative thinking. The study was conducted using the modeling method and a psychodiagnostic approach based on the characteristics of creative thinking. An experimental study was conducted with a sample of 120 students from the Plekhanov Russian College of Economics, aged from 19 to 21 years. The results comprised 1) the characteristics and dynamics of students' value systems and creative thinking; 2) a developed program for the development of intrinsic motivation; 3) a model for designing a pedagogical environment for students' engineering and creative thinking in STEM education; and 4) the testing of the developed programs and models. The results also showed a statistically significant relationship between the development of students' intrinsic motivation and the reorientation from normative-limited to creative-free thinking. Considering these results, it was concluded that the model developed by the authors helped to shape and develop students' engineering and creative thinking. Implications for further research and teaching are drawn. INTRODUCTION The main feature of today's society is a high rate of change and a correspondingly large amount of information, which raises the complexity of the requirements placed on a person's personality. For successful adaptation today, a person needs not so much acquired experience and knowledge as certain personality traits expressed in the formation of skills, the acquisition of competencies and the development of the ability to solve problems together, as well as motivation for achievement and self-development (Gafurov et al., 2020). Modern pedagogy uses practice-based learning to develop in students the professional skills that employers urgently need and an understanding of where, how, and why the skills they acquire are applied in practice. Developing critical thinking skills is considered an important educational goal and has gained greater recognition in recent years (Tavukcu et al., 2020). Critical thinking is disciplined, self-directed, and self-regulated thinking that demonstrates the mental abilities appropriate to a particular mindset or domain. Within STEM training, psychological and educational skills are developed that help students freely choose ways to solve the problem under discussion. Practice-based learning influences both the activity and the emotional intelligence of the individual, and it employs a system of means, forms and methods that support students' educational activities by involving them in real professional conditions. Using critical thinking strategies can also prepare students for the rigors of college life and help them develop the skills they need for successful employment. Developing critical thinking skills helps students solve real-world problems and think with an open mind.
To develop critical thinking in students, the future teacher must be able to recognize student responses, provide timely feedback, and apply an individualized approach as much as possible (Cortázar et al., 2021; Kareem, Thomas, & Nandini, 2022). Most important, however, is developing one's own critical thinking skills. To achieve this goal, the STEAM and STEM curriculum encourages students to combine scientific, technical, engineering, artistic, and mathematical knowledge in the form of group instruction and experimental research, and to acquire the skills needed in today's society (Akiri et al., 2021; Alsmadi, 2020; Bahrum et al., 2017; Hashemi et al., 2015; Nourooz et al., 2015; Salakhova et al., 2021). At the current stage of development of higher education in Russia, the authors consider the idea of using STEM in the educational process and the idea of the extensive use of intelligent technologies in the practice-oriented training of future teachers. In this context, STEM education is important for prospective teachers, as they can use modern digital technologies and a practice-oriented approach to teaching students. The psychological orientation of the professional training of future teachers includes the formation of personal and professional views and humanistic ideas about the educational process in general. Many empirical studies have examined the effects of various teaching strategies and interventions on the development of students' critical thinking skills (De Meester et al., 2021; Kelley & Knowles, 2016; Loyalka et al., 2021; Ma, 2021; Park & Nuntrakune, 2013; Sabirova & Deryagin, 2018; Parks et al., 2021). In this study, STEAM teaching methods are used as a tool to develop critical thinking in future teachers. This teaching method is an effective tool for developing thinking because it allows future teachers to use their own experiences and information, identify the strengths and weaknesses of their personality, and build a developmental path for their growth as future teachers. The purpose of the study was to determine the degree of effectiveness of using STEAM in training and developing future teachers' critical thinking skills. To achieve this goal, the following objectives were established: 1. theoretical and methodological analysis of the research problem and development of a theoretical model for designing the pedagogical environment for students' technical and creative thinking; 2. development and testing of a model for the formation of personality traits in students related to the development of technical and creative thinking; 3. examining the characteristics and dynamics of value systems and creative thinking in college students; 4. developing and testing an additional course, "STEAM Education", for students.

Contribution to the literature
• This study presents a model developed for shaping the educational environment for students' engineering and creative thinking based on STEM technology.
• This study provides 1) an investigation of the characteristics and dynamics of value systems and creative thinking in students; 2) the development of a program for the development of intrinsic motivation; 3) the confirmation of the developed programs and models for the formation of the educational environment for students' technical and creative thinking based on STEM technology.
• This study shows that it is necessary to stimulate the development of social and pedagogical programs in education, as well as to develop comprehensive programs to adapt the STEM approach in the higher education system.
• The results show that, through the practical implementation of STEM technologies in the system of higher vocational education, it is possible to develop technical and creative thinking in students.

THEORETICAL ANALYSIS An effective means of intellectual development, formation of motivation for learning activities and development of creativity is scientific and technical creativity, together with the introduction of innovative subprograms aimed at the development of a child's personality (Murodkhodzhaeva et al., 2021). There is a need to implement STEM education by integrating knowledge into solving applied, urgent problems in project groups; at the same time, there is a shortage of trained STEM professionals in education to organize this new approach to learning, as well as a lack of detailed descriptions of how to implement the approach in an educational organization (Panyushkin, 2021). This confirms the need to develop new methods of work and include them in the program of psychological and educational support for student self-determination. However, if competent and comprehensive career guidance work is carried out, paying due attention to the formation of motivational factors for high school students in career choice, it is possible to achieve positive development in career choice (Andreeva et al., 2021). For example, a study by researchers in Italy found that children's preference for spatial toys and spatial sports promotes spatial thinking skills, which contributes to the successful inclusion of engineering thinking in STEM programs based on a project-based and interdisciplinary approach (Moè, Jansen & Pietsch, 2018). Learning spatial reasoning contributes to a successful STEM career in a person's life (Jeng & Liu, 2016). Moreover, in the context of STEM technology, conditions for independent learning, on the one hand, and a friendly and supportive environment, on the other, are created, which increases the motivation for independent problem solving (León et al., 2015). An analysis of the results of a study by researchers from China showed that, through the STEM approach, children develop mathematical skills, especially the spatial reasoning necessary for future engineers, particularly at the age of 5-6 (He et al., 2021). It was found that boys' choice of STEM subjects was based on their interest in the field, while for girls this choice was determined by their confidence in their mathematical abilities (Sakellariou & Fang, 2021). Along the same lines, the results of a study conducted by a group of researchers from Switzerland have shown that fostering students' interest in mathematics and science increases the perceived value of these subjects and the likelihood of choosing a STEM career in the future (Aeschlimann et al., 2016). STEM builds high motivation in students, and independence in decision making contributes to the formation of appropriate conditions for creativity (Vanykina & Sundukova, 2020). Through STEM education, students can delve deeper into the logic of ongoing phenomena, understand their interrelationships, study the world systematically, and thereby develop curiosity, an engineering style of thinking and the ability to cope with critical situations, while acquiring teamwork skills and mastering the basics of management and self-presentation (Lazareva & Marchuk, 2019).
Critical thinking is one of the driving forces of science in general, and modern science offers many perspectives from which to take a new look at existing reality and at approaches to discovery (Chaika, 2017). Learners acquire the basics of professional activity and the skills of a scientific, systemic approach to solving specific educational problems related to the design of educational programs. Students are able to delve into the logic of the phenomena studied and understand their interrelationships and consistency. Thus, they develop an engineering style of thinking, the ability to cope with critical situations, teamwork skills, and management and self-presentation skills (Zubenko & Sukhova, 2018). For example, a study conducted in the United States found that personal characteristics such as cognitive ability and independence were very important factors for success in the STEM field, while the personal characteristic of sociality provided the opportunity to adapt successfully to the organizational environment (Fagan et al., 2019). Emotional intelligence was found to have a significant impact on the effectiveness of STEM training (Ferguson & Austin, 2011). A similar study showed the cognitive impact of interests and current intentions on the success of STEM education (McIntyre, Gundlach & Graziano, 2021). In a large sample, intrinsic motivation was found to positively impact high student achievement in STEM courses, warranting a problem-based approach to student learning (Botnaru et al., 2021). Based on an analysis of the differential-equation-solving results of students who participated in STEM courses, it was found that certain thinking patterns shaped by previous experience negatively affected the mastery of new equation-solving methods (Stratton, 2021). Within STEM, when creating a robot, a student can deal with concepts such as coordinate axes, angles, curves, and even the basics of neural networks (mathematics); algorithms for finding the shortest path in the shortest possible time (computer science, energy and time saving); and different sensors based on basic mechanical, optical and electromagnetic laws (physics). Such a topic is also a technical, design and sometimes artistic task (Kostina & Gladkikh, 2019). With regard to STEM education, we can conclude that its use is expedient for maintaining the effectiveness of teaching processes, adapting the current electronic information and educational environments to new conditions, and ensuring the productive compatibility of educational work in the digital educational environment of educational institutions of all levels (Aniskin et al., 2019). An analysis of the professional and academic careers of students in England showed growing interest in subjects related to mathematics and science and a further preference for STEM careers (Banerjee, 2016). In this context, the results of a theoretical analysis of students' expectations allowed us to theoretically support and empirically confirm the influence of biological, sociocultural, and psychological factors on the motivational basis for choosing a STEM career from the perspective of individual and gender differences (Wang & Degol, 2013). Moreover, the analysis of personality profiles showed the influence of motivation on high achievement in mathematics and science (Fong et al., 2021).
The results of a similar study by scientists from the United States showed that students may develop an identity associated with science during their studies, which determines their trajectory in terms of pursuing deeper study of science and choosing STEM in the professional field (Robinson et al., 2019). It should be noted that only teachers who have received special education or additional professional training are able to work in a unified system of scientific disciplines and technologies (STEM education) (Chemenkov & Krylov, 2015). Training future teachers using the latest developments in the field of STEM education can improve the quality of education for the younger generation and address the shortage of qualified teachers who are ready to organize the educational process using modern equipment and educational technologies for developing students' engineering skills (Marinyuk & Serebrennikova, 2018). In this regard, certain beliefs of teachers who want to integrate the STEM approach into their courses form the normative component of STEM education (Pryor et al., 2016). A recent study found that a teacher who does not distinguish between the racial or ethnic backgrounds of his or her students is more objective in his or her assessment in STEM (Good et al., 2020). Moreover, students who take STEM courses are convinced of the social usefulness of their activity (Steinberg & Diekman, 2018). However, the gender factor plays some role in the selection of tasks in STEM, as boys are more likely to choose technological tasks and girls are more likely to choose tasks related to art and design (Farrell & McHugh, 2020). In general, it is possible to develop interactive courses available on the Internet that have a long-term impact on STEM skill development (Dreessen & Schepers, 2019). Recently, there have been many projects aimed at introducing STEM education in schools at different levels. This situation shows that the modern education system has responded in practice to the needs of society in the context of Industry 4.0 (Bogdanova, 2018). In a STEM module, students have the opportunity to identify a problem, determine the importance of the topic and the objectives and hypotheses of the study, conduct an experiment, and analyze and evaluate the results, or to carry out an engineering (IT) project aimed at solving applied technical cases of specific companies (Konyushenko et al., 2017). In this regard, it is possible to train teachers who can deliver high-quality STEM education to the young generation (Grigoriev et al., 2018). The Methodological Basis of this Study was the Modeling Method and the Psychodiagnostic Approach The study included five phases: Phase 1: theoretical and methodological analysis of the research problem and development of a theoretical model for designing the educational environment for students' technical and creative thinking. Phase 2: the survey experiment: studying the characteristics and dynamics of students' value systems and creative thinking before the introduction of the intrinsic motivation value development program. Phase 3: the formative experiment: the introduction of value development programs of intrinsic motivation into the educational process. Phase 4: the control phase: confirmation of the changes in the value systems and the level of creative thinking of students. Phase 5: the analysis, synthesis and generalization of the obtained results and the formulation of conclusions.
The experimental study was conducted at the Dimitrovgrad Institute of Engineering and Technology (a branch of MEPhI), the Financial College of the Government of the Russian Federation, and the Plekhanov Russian College of Economics. The study involved 120 students between the ages of 19 and 21. The control and experimental groups were homogeneous in gender and age composition, with equal numbers of boys and girls (60 boys and 60 girls overall). The S. Schwartz Value Questionnaire, which identifies the main value-motivating areas of young people's personalities, was used as a diagnostic instrument (Schwartz, 2012). The F. Williams Test was used to measure the cognitive component associated with creativity (Tunick, 2003). The t-test was used to analyze the empirical data.
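As an illustration of the kind of before/after comparison this t-test supports, a minimal Python sketch follows. The scores below are invented for the example, and since the paper does not state whether a paired or independent-samples test was used, the choice of ttest_rel (a repeated-measures test on the same students across phases) is an assumption.

import numpy as np
from scipy import stats

# Hypothetical pre/post creative-thinking scores for one subgroup
# (the paper's raw data are not reproduced here).
pre = np.array([12, 15, 14, 10, 13, 16, 11, 14], dtype=float)
post = np.array([14, 18, 15, 13, 16, 17, 13, 17], dtype=float)

t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test across phases
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant shift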
RESULTS AND DISCUSSION Based on the results obtained from the theoretical and methodological analysis, we applied the modeling method, which made it possible to develop a model for the design of the educational environment for students' engineering and creative thinking. In the model, five structural and functional blocks and their interrelationships were identified. In the goal block, the goal is specified in tasks, which divide the subsequent blocks into cognitive, motivational, and activity components. The identified components made it possible to reveal the basic concepts in the content block, systematically distribute the principles and approaches in the methodological block, determine congruent methods and means in the technological block, and identify the criteria for training students' technical and creative thinking in the evaluative and effective block. Modeling the pedagogical process of forming engineering and creative thinking in students through STEM technology allows the development of a structural and functional model that includes the main goals, objectives, principles, approaches, methods, connections and structural components of the modeled process. Taking into account the specifics of children's innovative education and technological activities, as well as the age characteristics of students, the following five blocks were included in the structural-functional model: the goal, the content, the methodology, the technology, and the evaluative-effective part (see Table 1).

Table 1. Structural and functional model of the formation of engineering and creative thinking of students through STEM technology

TARGET BLOCK
Purpose: the formation of engineering and creative thinking of students through STEM technology.
Objectives: 1. formation of a system of skills, knowledge and creative skills of students in the field of engineering activity using STEM technology; 2. creating a special creative environment that takes into account the age characteristics of students and encourages them to take up engineering activities; 3. developing students' creative skills in creating and implementing engineering projects; 4. career guidance for teenagers.

CONTENT BLOCK
Motivation-value component: willingness to introspect mental activity; willingness to make new hypotheses and formulate the conditions of a problem, with the implementation of appropriate transformations; training of mental actions such as analysis, planning and introspection of mental activity within specially organized educational activities; readiness for project activities; willingness for self-realization and continuous self-improvement in the field of technology.
Activity component: the ability to think creatively on the basis of STEM education; finding different ways to solve a problem of a certain type using special methods of organizing mental activity; incorporating visual-figurative associative thinking in the process of teaching STEM technologies; solving scientific problems using heuristic methods of thinking; using intellectual collective creativity; the ability to apply the latest resources and technical means of engineering activity.
Cognitive component: knowledge of how to find solutions to engineering problems; knowledge of engineering; understanding of the engineering profession; understanding of technical and creative thinking; indicators of developing imagination, curiosity, intellectual ability, visual thinking, fluency of thought, flexibility, originality of thought and divergent thinking.

METHODOLOGICAL BLOCK
Principles: 1. individualization and taking account of age characteristics; 2. flexibility and originality of thinking, creativity; 3. meta-subject orientation; 4. links between theory and practice; 5. creative initiative and consciousness; 6. illustration.

TECHNOLOGICAL BLOCK
Techniques and means: multimedia tutorials (presentations, websites), tutorials, visual aids (video films, mind maps, infographics, designs, models), hardware (projector, tablet, computer equipment, mobile devices), robotic complexes (Lego and others), game development, and project activities based on the interpreted Python language, the Tkinter GUI module, and other software tools.

EVALUATIVE AND EFFECTIVE BLOCK
Criteria for assessing the results achieved by schoolchildren: 1. optimum level; 2. sufficient level; 3. insufficient level. Levels of the formation of engineering and creative thinking of schoolchildren: 1. high; 2. medium; 3. low. Result: a high level of the formation of engineering and creative thinking of students.

Considering the developed structural and functional model of technical and creative thinking, it should be added that the social institution of higher education in contemporary Russia is increasingly characterized by the need to introduce a pedagogical process whose goal is to form a harmoniously developed personality of a young person. This trend explains the desire to integrate a humanistic approach into the pedagogical technologies of higher professional education, which, in addition to the acquisition of knowledge by students, means the integration of value-based education into the educational process (Mackay et al., 2021). All this is determined by the fact that, as a result, a college graduate is a person who can adapt to the requirements of contemporary society and, accordingly, to the value system in force in this society. At the same time, the internalization of social values by an individual is a long, multi-stage process, in which adolescence occupies the last stage, which determines its importance for the formation of a harmonious personality (Luneva et al., 2020). The most important personal constructs of this age period are the formed self-concept, self-esteem, and a system of value orientations that harmonize a person's relationship with himself, with the people around him, and with society in general (Purvis et al., 2020). At the same time, as R. Cover (2021) from Australia notes, the transformation of value self-determination can lead to both an increase and an underestimation of self-esteem (Cover, 2021). As for the educational process at the college, the formation of the value bases of the student's personality as a future specialist able to work effectively in the modern world is characterized precisely by the humanization of the educational process. On the other hand, modern scientific and technological progress is characterized by constant innovation and has reached a level that requires a specialist to develop continuously and, accordingly, to grow professionally and personally. In turn, modern pedagogy provides young people with opportunities thanks to which they are able to fully meet the demands that today's society places on them. However, the decreasing humanitarian component of higher education should be noted, which may negatively affect the innovative component of the educational process (Merzlyakova et al., 2020; Rikel, 2020; Shaidullina et al., 2018). Finally, as stated by a group of researchers from High Point College (USA), choosing the right pedagogy is the first step to a student's proper acquisition of knowledge (Sahagun et al., 2021). In this context, it is necessary to apply a systematic approach to the formation of a system of value orientations in the personality of a young person. This is determined by the fact that, in the value-normative system, the social, socio-psychological and psychological-pedagogical relations of the individual are interconnected, the core of which is the understanding of one's purpose in life, the formation of a worldview and orientation in one's life. Thus, the implementation of a systematic approach makes it possible to uncover the spiritual and creative potential of a young person's personality, and the necessary pedagogical conditions for this task can be provided by a methodological basis aimed at the creative development of the personality and, consequently, the development of creative thinking (Shmeleva, 2020). This basic approach, which combines the features of creative thinking expressed in high motivation to solve educational problems with value self-determination expressed in the high importance of cognition, formed the basis of the developed program for the value development of young people. The value development program is based on the assumption that developing intrinsic motivation for activities increases the level of creative thinking. Within the framework of the program, an analysis of the life path of young people was carried out, through which the personal characteristics of significant people associated with important events in young people's lives were revealed (G. Kelly's theory of personality constructs, 1963). In the program, through a series of theoretical and practical lessons, young people are invited to take on the roles of internally motivated personalities, giving them a new experience of personal development and increasing their motivation for creative activities. In this regard, the program is a pedagogical technology aimed at developing motivation for creative activity, which allows studying the impact of motivation development on value self-determination and its relationship with creative thinking (Vershinina & Ilyushkina, 2020). All this is in line with the sociocultural theory of creative self-determination developed by the Danish scholars V. P. Glaveanu and L. Tanggaard (2014), who point to the relationship between creative thinking and personal self-determination expressed in attitudes toward oneself, others, and society.
To test this thesis, an experimental implementation of the value development program was conducted in groups of students as part of the educational process. In order to analyze the dynamics of changes in the indicators of values and creative thinking, a comparative analysis was conducted using Student's t-test. This analysis showed no statistical differences in the indicators before and after the experiment in the control groups of boys and girls (Table 2). The absence of changes in the dynamics of the importance of value systems and indicators of creative thinking in the control groups is explained by the absence of socio-psychological factors influencing these groups; their activities took place under the conditions of a normal educational process. A comparative analysis of the indicators of values and creative thinking in the experimental group before and after the experiment revealed statistically significant differences in the value and creative thinking scores of boys and girls. The obtained data show that the implemented socio-pedagogical program produced the observed dynamics in the importance of values and in the indicators of creative thinking. The increase in the importance of the value of pleasure can be explained by the release of intrinsic motivation, which is accompanied by a loosening of restrictions in one's life; this is also manifested in a decrease in the importance of normatively oriented values, such as conformity, support of traditions, sociality and social culture. It is also worth noting that both boys and girls in the experimental group showed a decrease in the importance of the value of security, which is likewise associated with a reorientation towards a creative approach to life. In this regard, the analysis of one's values and their reorientation towards internally motivated activity also has a significant effect on the cognitive component of activity, which is reflected in an increase in the main indicators of creative thinking. The indicators of flexibility, elaboration and verbal creativity also increased in the girls, though it is worth mentioning that their indicators of fluency and originality were initially at a high level. CONCLUSION The results of the study show that the socio-educational program influenced the observed dynamics of the importance of values and the indicators of creative thinking. At the same time, there is a shift away from norm-oriented values towards a transgression of the limiting framework, which is consistent with general theoretical ideas about the creative approach and the creative personality. A creative approach to one's life activities increases both enjoyment of life and the productivity of solving creative problems. Our study indicates that value structures and cognitive structures are not linearly related (the correlations were not significant); this non-linear structure accounts for the absence of statistically significant differences in the control group sample at all stages of the experimental study. Nevertheless, our results indicate a relationship between mental structures and value-motivational structures.
The discovered relationship is expressed in the dependence of the level of creative thinking on the values of the individual, which can be explained by the orientation of students towards creativity and the desire for self-development and self-actualization. The main activity of college students is educational and vocational (Vygotsky, 1978), which determines the dominance of motives oriented to cognition, creativity, and personal development. Thus, the results show that there is a statistically significant relationship between the development of the individual's internal motivation and the reorientation from a normative-limited to a creative-free type of thinking in students. It has been shown that the model developed by the authors, based on STEM technology, shapes and develops the engineering and creative thinking of students. The obtained results can be used in the development of educational programs in both higher and secondary schools. In addition, the problem of the relationship between value-motivation structures and students' creative thinking should be pursued further to uncover the factors that contribute to a creative approach in the educational process. At the same time, the observed relationship between the value of spirituality
6,664
2022-04-21T00:00:00.000
[ "Education", "Engineering" ]
Construction of Cognitive Maps to Improve Reading Performance by Text Signaling: Reading Text on Paper Compared to on Screen Reading text from a screen has been shown to be less effective than reading text from paper. Various signals may provide both background information and navigational cues, and may promote the construction of cognitive maps during on-screen reading, thus improving reading performance. This study randomly divided 75 college students into a paper reading group and an on-screen reading group. Both groups were tested for navigation and reading comprehension in response to three different forms of signaling (plain text, physical signaling, and verbal signaling). The results showed that when plain text was presented, the navigation and comprehension scores of the paper reading group were significantly higher than those of the on-screen reading group. However, no significant difference was found between the two groups under signaling conditions. The navigation and comprehension scores of both groups were significantly higher under signaling conditions than under plain text. Moreover, the comprehension score of the on-screen reading group under physical signaling was significantly higher than that under verbal signaling. This research suggests that signals help readers construct cognitive maps and effectively improve reading performance, and that physical signaling, such as underlining and bold formatting, is more effective for on-screen reading. The present study provides a practical and effective approach for improving on-screen reading based on cognitive map theory. INTRODUCTION As a result of the development of electronic technologies, reading has become increasingly digitized. The popularity of digital reading devices, such as the iPad and Kindle, has significantly decreased visual fatigue and operational discomfort during on-screen reading (Lin et al., 2008). Readers are therefore satisfied with such devices for reading prose, which does not require the application of active reading strategies (Thayer et al., 2011). However, on-screen reading seems to be appropriate only for reading prose, such as novels. When reading more complicated and challenging texts, such as expository or technical content, on-screen reading remains insufficient and has been shown to be unsuitable (Liu, 2005; Clinton, 2019). Therefore, this study used expository text as research material to explore how the on-screen reading performance of such texts can be more effectively improved. Cognitive Map The idea of a cognitive map originates from theoretical research in psychology on spatial cognition. A cognitive map is a form of cognition that represents environmental information, a model similar to the field map formed in the brain on the basis of past experience (Yang and Bi, 2005). When the human brain collects visual information about an object, it also collects information about its surroundings and connects the two together (Jabr, 2013; Li et al., 2013). As a result, when people read a text, not only the words and semantics of the text but also its physical location and background information enter the brain for processing as a whole, forming a cognitive map of the text (Payne and Reader, 2006; Hou et al., 2017a). Similar to how a physical landscape is remembered, readers form a cognitive map of the physical location of text segments on a page (Hou et al., 2017b).
During the reading process, readers first identify "landmarks," namely, important concepts, knowledge, or information. Then, they construct routes between the landmarks, i.e., front and back, far and near, as well as hierarchical relationships between concepts, knowledge, or information in logical and spatial positions. Finally, they integrate these landmarks and relationships into survey knowledge, i.e., they build textual cognitive maps (Foo et al., 2005; Voeroes et al., 2011). Based on this, cognitive maps in the reading domain can be defined as the mental representation of the structure of a text and its background context that readers construct during reading (Thayer et al., 2011; Li et al., 2013; Hou et al., 2017a,b). The construction of such cognitive maps not only helps to locate the content that has been read but also leads to more effective retention and recall of text information (Rothkopf, 1971; Lovelace and Southall, 1983; O'Hara et al., 1999; Morineau et al., 2005). According to cognitive map theory, whether a text presentation can promote the formation of a cognitive map of the text structure is the key factor that influences reading outcomes (Hou et al., 2017a,b). During paper reading, the provision of rich background information helps the formation of knowledge landmarks (Li et al., 2016), which readers can use to locate information and associate its physical position in the text with the logical order of its contents (Li et al., 2013; Mangen et al., 2019), thus forming survey knowledge. However, because of the lack of background information and navigational cues during on-screen reading (Jabr, 2013; Li et al., 2013), as well as the loss of spatial knowledge about the location of specific content, readers are unable to attain an overall grasp of the text structure, which obstructs their construction of an effective mental map (Morineau et al., 2005; Payne and Reader, 2006; Rose, 2011; Thayer et al., 2011). Consequently, an important question is how on-screen text can be better displayed to help readers construct cognitive maps and thus improve their on-screen reading performance. Li et al. (2013) developed an e-reader that combines maps of visual cues with two reading strategies and found that such maps can help readers to construct cognitive maps during on-screen reading, which promotes navigation and reading comprehension. Hou et al. (2017a) suggested that as long as the text presentation for on-screen reading completely imitates that for paper reading, it is conducive to the construction of cognitive maps, and readers' performance during on-screen reading tasks can be improved. Tang et al. (2020) designed an iReader digital reading program that matches paper text in the conditions it provides for cognitive map construction and found that, under these conditions, there was no difference between on-screen and paper reading performance. These studies indicate that as long as the conditions for the construction of cognitive maps are similar to those for paper texts, on-screen reading performance can be improved. However, these studies mainly helped readers to construct cognitive maps and improve their on-screen reading performance by means of reading software and technology. Since cognitive maps are formed while a reader processes textual information, it should also be possible to identify effective ways to construct cognitive maps based on reading behaviors and habits. Research on signals inspired our discussion of this issue.
Signals In the process of reading, to complete the reading task effectively, learners usually adopt certain reading strategies to master the content of the reading materials and solve problems in reading. Text signaling is one of the most commonly used reading strategies (Li et al., 2016). Text signals include words, phrases, sentences, or special symbols that can appear in different places within a text; rather than adding any new content, they emphasize the structure or specific content of the text (Britton et al., 1982; Lorch, 1989; Van Gog, 2014). The signaling promotion effect is defined as the promoting effect of text signals on comprehension processes and information retention of a text (Lorch et al., 1993; Lorch and Lorch, 1996; He and Mo, 2000). In multimedia learning, it is also known as the signaling principle or cueing principle, and it refers to the finding that people learn better when signals are added that guide attention to certain elements of the material or highlight its structure (Mayer, 2005; Van Gog, 2014). Signaling forms mainly consist of physical signaling and verbal signaling. Physical signaling emphasizes important information and words mainly by highlighting, underlining, and bold formatting. Verbal signaling includes headings, summaries, and organizing charts (He and Mo, 2000; Mayer, 2005). Organizing charts, a form of verbal signaling, use a visual method to analyze and compare keywords, concepts, or central sentences within a text, determine their hierarchical relationship, and present the main structural framework of the text in the form of a network (Du et al., 2006). It has been reported that organizing charts help readers to organize and represent knowledge (Hagemans et al., 2013), which improves their level of recall and comprehension of the reading material (DeLauder and Muilenburg, 2012). In the early stages, research on the promotion effect of signals was mainly conducted in the form of experiments testing the impact of specific signals on paper reading. Most relevant studies suggested that readers achieve better recall performance and reading comprehension for texts that include signals compared with texts that are devoid of signals (Johnson, 1988; Lorch et al., 1993; Amer, 1994; Lorch and Lorch, 1996). With the advent and increasing popularity of the Internet after the year 2000, an increasing number of scholars turned their attention to the role of signals in reading media other than paper, focusing on hypertext and hypermedia learning. Many studies found that the proper inclusion of cues in multimedia learning materials can help to improve the academic performance of readers (de Koning et al., 2007, 2010; Mautone and Mayer, 2007). For example, Jamet (2014) confirmed that the integration of signals is conducive to promoting the integrated processing of graphics and text and significantly improves the test scores of learners. Colliot and Jamet (2018) also found that the use of outlines as signals promotes the memorization and comprehension of learning materials by college students and enables them to achieve higher scores in both retention and transfer tests. On the other hand, some studies have shown that signals do not improve learning performance (Lowe and Boucheix, 2011; Li et al., 2016); for instance, eye movement experiments by Kriz and Hegarty (2007) showed that cues can effectively guide learners to notice task-related information while not enabling them to achieve higher scores in retention and transfer tests.
However, most studies suggested that signals can guide the attention distribution of readers, provide cues that are important for reading processing, help readers to form a representation of the organizational structure of the article, and ultimately promote both reading comprehension and retention (Lorch et al., 1993; Lorch and Lorch, 1996; Mautone and Mayer, 2001; Ponce and Mayer, 2014). In summary, existing research has shown that on-screen reading is not as effective as paper reading, especially for expository texts, whose purpose is to give information and which require a deeper and more detailed level of processing (Margolin et al., 2018). Researchers have suggested that this is because it is difficult for on-screen readers to construct effective cognitive maps, but cognitive map theory has rarely been studied empirically. Meanwhile, it has been found that the use of signals can promote the efficiency of both paper reading and multimedia learning and can provide cues that help readers to construct structural representations of the subject of the text. However, whether signals presented in on-screen reading have the same effect they have in paper reading still requires further investigation. Few studies have explained how different signals affect on-screen reading. Therefore, the present study compared reading and navigation performance between on-screen and paper reading to examine the cognitive map theory of on-screen reading. Furthermore, we hypothesized that on-screen signals can provide readers with background information and navigation cues, and can thus help readers to construct cognitive maps when reading on-screen text, thereby effectively improving navigation and reading comprehension. At the same time, because of its structural integrity and visual intuitiveness, an organizing chart may be more conducive to the formation of cognitive maps, and thus to the improvement of reading performance, than bold formatting and underlining. To this end, this study designed experiments to specifically investigate the impact of different forms of signaling on navigation and reading comprehension on different media. Furthermore, the internal mechanisms were investigated by combining signaling and cognitive map theory. This study attempted to answer three research questions: 1. Is on-screen reading less effective than paper reading for expository text? 2. Does text signaling help to construct cognitive maps during on-screen reading, thus improving both navigation and reading comprehension? 3. Is verbal signaling (organizing charts) more helpful for the construction of cognitive maps, and does it improve reading performance more than physical signaling (bold formatting and underlining) for on-screen reading? Participants Seventy-five freshman students (mean age 19.53 ± 1.39; 52 male and 23 female) were recruited from two classes majoring in electronic science and technology at Central South University of China. Their majors were consistent, thus avoiding the impact of professional background on reading comprehension and navigation. All the participants had normal or corrected eyesight and no dyslexia. Before the reading session, participants completed a pre-test questionnaire asking about demographic information, such as sex, age, and Chinese language scores, and about their on-screen reading experience and screen use habits.
It was found that all the participants had the language reading ability necessary to participate in the experiment (their average score in the Chinese language section of the college entrance examination was 112.56, SD = 7.41, with scores ranging from 97 to 130 out of a total of 150 and a passing score of 90). All of these participants used or were exposed to electronic screens very often in their daily lives, averaging 2.74 h of text reading on electronic devices per day (SD = 1.50); consequently, they were considered to be familiar with on-screen reading. Half of the students were assigned to the paper reading group and half to the on-screen reading group, with gender approximately balanced across the groups. See Table 1 for a summary of the participants and pre-testing details. The equivalence of the demographic variables, screen use habits, and on-screen reading experience between groups was tested using a series of one-way ANOVAs. No significant difference was found in these pre-test scores between the two groups (all p > 0.05). Thus, the groups were equivalent in terms of these variables (e.g., age, Chinese language scores, on-screen reading experience, and screen use habits). After reading, the experimenter checked with the participants whether they had read the expository texts or acquired the relevant knowledge before. This was not the case for any of them. The study had prior approval from the Ethics Committee of Hunan Normal University in China. We obtained written informed consent from all the participants, and each of them was paid 20 RMB for participating. Materials Before the experiment started, pre-tests were conducted to select three expository texts as reading materials. Ten Chinese technical expository texts were selected from the "Civil Servants Exam 2018: 200 Articles." The original texts and corresponding test questions were partially modified to be more in line with the experimental requirements regarding length, language expression, and question types. Each text contained 1,200-1,300 words and was displayed on two pages. Twenty college students took pre-tests on these 10 expository texts; each of them was asked to read the 10 texts and complete the corresponding test questions, and the final three articles with similar test scores and medium difficulty were selected as reading materials. The contents of these three texts involved the "origin of civilization," "energy and economy," and "food additives," respectively. Two experienced professional tutors processed the plain text and added physical signaling and verbal signaling to the three scientific expository texts (see Figure 1). Plain text (also called non-signaling text) refers to text with neither physical nor verbal signaling information. Physical signaling indicates that key concepts and sentences in the text are underlined or formatted in bold. Specifically, we bolded the core concepts and key points of the expository text and underlined the topic sentences and summary sentences (e.g., in the text about "food additives," the first sentence of the second paragraph, "Food additives refer to chemical compounds or natural substances that have been approved by the state to be added to food for anti-corrosion and freshness preservation, improvement of processing technology, etc.," was underlined, and "food additives" was formatted in bold).
Verbal signaling indicates that the text was presented together with an organizing chart, which shows key items and their relationships in the text (e.g., in the organizing chart based on the text about "origin of civilization," the key items include "Civilization," "Three International Civilization Standards," and "Civilization Standards Used in Our Country," etc.; Hagemans et al., 2013). Reading Comprehension Test There were eight reading comprehension questions after each text: four judgment questions and four single-choice questions, assessing the two specific aspects of detailed recall and comprehension inference. Among these questions, the first and second judgment questions and the seventh and eighth single-choice questions were inference questions, which examined the readers' ability to comprehend and infer the overall meaning of the text. A sample judgment question is: "Based on the meaning of the text, nations without cities have not entered the stage of civilization." The third and fourth judgment questions and the fifth and sixth single-choice questions were recall questions, which tested the readers' ability to recall text details. A sample single-choice question is: "Why does the UK Treasury provide interest-free loans to some enterprises?" For each question, a correct answer scored one point and an incorrect answer scored zero points, for a total of eight points per text. Navigation Test Following the measurement methods used by Mangen et al. (2019), participants were asked to locate four related contents or concepts in the text, which were placed in the first half of the first page, the second half of the first page, the first half of the second page, or the second half of the second page (sample item: "Please locate the following contents in their correct place in the text: In which year was the Initial Civilization published?"). For each question, a correct answer scored one point and an incorrect answer scored zero points, for a total of four points per text. Experiment Design and Apparatus This study used a 2 (media: paper vs. on-screen) × 3 (signaling: plain text vs. physical signaling vs. verbal signaling) mixed experimental design, with media as the between-participants variable and signaling as the within-participants variable. The paper group read the texts on printed A4 paper, and the on-screen group read the texts on a 19-in DELL screen (using Microsoft Word 2010 software). The text content, layout, format, color, and display form used for presentation were identical in both media. The reading and navigation performance of the participants was the dependent variable. Reading performance was measured via reading comprehension scores (the sum of the scores of recall and inference questions), while navigation performance was measured via navigation test scores. Procedure The experiment was conducted in a usability laboratory. First, participants were introduced to the experiment and completed the consent procedure. Subsequently, they completed a pre-test, that is, a paper-and-pen questionnaire asking about their demographic information and on-screen reading habits and experience. Then, participants were assigned to the paper reading group or the on-screen reading group and were instructed to read the texts at their normal pace.
They were not informed of the exact purpose of the experiment but only that they were going to read three texts in different signaling forms on a computer or on paper and that they would answer some questions afterwards. The experimenter recorded the reading time. After reading, participants first completed the corresponding comprehension tests and navigation tests for each text before continuing to the next text. All participants completed paper-and-pen tests without a time limit, and they were not allowed to look back at the materials when answering the questions. To control for order effects, the order of the three texts in different signaling forms was randomized. The three texts were arranged into six sequences (i.e., abc, acb, bac, bca, cab, and cba). According to the number of participants in the two groups (paper reading group: 38; on-screen reading group: 37), every six participants were assigned to one of the six sequences, and the remaining one or two participants were randomly assigned to any one of the six sequences. The experiment lasted approximately 10-20 min. Data Analysis Data were analyzed using SPSS software. The data analysis of this study mainly included four parts: (1) descriptive statistics and a one-way ANOVA of reading time; (2) descriptive statistics on the results of total comprehension and navigational performance; (3) a 2 (media: paper and on-screen) × 3 (signaling form: non-signaling, physical signaling, and verbal signaling) two-way repeated measures ANOVA performed on comprehension scores and navigation scores, respectively; and (4) the same repeated measures ANOVA performed on the scores of inference and recall questions, respectively. When the interaction between the two independent variables was significant, the simple effects were further analyzed. Reading Performance Descriptive statistics of the comprehension scores and navigation scores of the paper and on-screen groups under different signaling forms are listed in Table 2. Comprehension Score Repeated measures ANOVA of comprehension scores showed that the main effect of reading media was significant, F(1,73) = 4.462, p < 0.05, partial η² = 0.058; the main effect of signaling form was significant, F(2,72) = 65.742, p < 0.01, partial η² = 0.474; and the interaction between signaling form and media was marginally significant, F(2,72) = 3.048, p = 0.050, partial η² = 0.040. Further simple effect analysis (see Figure 2) showed that, comparing the reading media, under the non-signaling condition the comprehension score of the paper reading group was significantly higher than that of the on-screen reading group, F(1,73) = 7.117, p < 0.01, partial η² = 0.089, while under physical signaling [F(1,73) = 0.045, p > 0.05, partial η² = 0.001] and verbal signaling [F(1,73) = 1.125, p > 0.05, partial η² = 0.015], no significant difference was found between the comprehension scores of the paper reading group and the on-screen reading group.
FIGURE 1 | Three forms of signaling. The left-hand page shows plain text (text without any signals). The middle page shows physical signaling (text with underlining and bold formatting). The right-hand page shows verbal signaling (an organizing chart of the text, which was presented at the end of the text).
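The mixed-design analyses reported in this and the following subsections can also be reproduced outside SPSS. Below is a minimal sketch in Python of the 2 × 3 mixed ANOVA and the simple-effects follow-up; it assumes a hypothetical long-format file reading_scores_long.csv with columns subject, media, signaling, and comprehension, none of which come from the original study materials.

```python
# Minimal sketch of the 2 (media) x 3 (signaling) mixed-design ANOVA.
# Assumes a hypothetical long-format table: one row per participant per
# signaling condition, with columns subject, media, signaling, comprehension.
import pandas as pd
import pingouin as pg

df = pd.read_csv("reading_scores_long.csv")  # hypothetical file name

# Mixed ANOVA: media is between-participants, signaling is within-participants
aov = pg.mixed_anova(data=df, dv="comprehension", within="signaling",
                     between="media", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])  # F, uncorrected p, partial eta^2

# Simple effects of media within each signaling form, as reported above
for form, sub in df.groupby("signaling"):
    res = pg.anova(data=sub, dv="comprehension", between="media")
    print(form, res.loc[0, ["F", "p-unc", "np2"]].to_dict())
```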
Simple effect comparison of different signaling forms showed that under the condition of paper reading, participants' comprehension scores differed significantly under different signaling forms [F(2,72) = 18.553, p < 0.01, partial η² = 0.340]. Specifically, no significant difference was found between comprehension scores of physical and verbal signals, but both scores were significantly higher than that of non-signaling. Under the condition of on-screen reading, participants' comprehension scores were also significantly different under different signaling forms [F(2,72) = 43.487, p < 0.01, partial η² = 0.547]. Specifically, the comprehension score of physical signaling was significantly higher than that of verbal signaling and non-signaling, and the score of verbal signaling was significantly higher than that of non-signaling. Navigation Score Repeated measures ANOVA of navigation scores showed that the main effect of the signaling form was significant [F(2,72) = 29.238, p < 0.01, partial η² = 0.448]. Further pairwise comparison showed that participants' navigation scores under physical signaling were significantly higher than under the other two signaling forms, and the navigation score under verbal signaling was significantly higher than that under non-signaling. The main effect of reading media was significant [F(1,73) = 4.388, p < 0.05, partial η² = 0.057]. Specifically, under the non-signaling condition, the navigation score of the paper reading group was significantly higher than that of the on-screen reading group [F(1,73) = 5.166, p < 0.05, partial η² = 0.066], while under physical signaling [F(1,73) = 0.167, p > 0.05, partial η² = 0.002] and verbal signaling [F(1,73) = 1.207, p > 0.05, partial η² = 0.016], no significant difference was found between the two groups. There was no significant interaction effect [F(2,72) = 1.051, p > 0.05, partial η² = 0.028; see Figure 3]. Inference Score To further clarify the underlying reasons for the effects of physical and verbal signals on comprehension scores in different media, this study analyzed the differences in the comprehension scores of inference and recall questions. Two-way repeated measures ANOVA of the inference score showed that the main effect of the signaling form was significant [F(2,72) = 36.918, p < 0.01, partial η² = 0.336]. Further pairwise comparison showed no significant difference in comprehension scores between physical and verbal signals, but both scores were significantly higher than under non-signaling. Concerning the reading media, although the inference score of the paper reading group was slightly higher than that of the on-screen reading group, there was no significant difference between the two groups [F(1,73) = 2.476, p > 0.05, partial η² = 0.033]. There was no significant interaction effect [F(2,72) = 1.387, p > 0.05, partial η² = 0.019; see Figure 4]. Recall Score Two-way repeated measures ANOVA of the recall score showed that the main effect of the signaling form was significant [F(2,72) = 27.474, p < 0.01, partial η² = 0.273], while the main effect of the reading media was not significant [F(1,73) = 2.529, p > 0.05, partial η² = 0.033]. The interaction between the signaling form and the media was marginally significant [F(2,72) = 2.800, p = 0.064, partial η² = 0.037; see Figure 5].
FIGURE 2 | Interaction analysis between reading media and signaling forms for comprehension scores.
A further simple effect analysis showed that, comparing the reading media, under the non-signaling condition the recall score of the paper reading group was significantly higher than that of the on-screen reading group [F(1,73) = 4.459, p < 0.05, partial η² = 0.058], while under physical signaling [F(1,73) = 0.045, p > 0.05, partial η² = 0.001] and verbal signaling [F(1,73) = 1.125, p > 0.05, partial η² = 0.015], no significant differences were found between the recall scores of the paper reading group and the on-screen reading group. Simple effect analysis comparing the signaling forms showed that, under the paper reading condition, participants' recall scores differed significantly across signaling forms [F(2,72) = 7.129, p < 0.01, partial η² = 0.165]. Specifically, there was no significant difference between the recall scores for physical and verbal signaling, but both were significantly higher than that for non-signaling. Under the on-screen reading condition, participants' recall scores also differed significantly across signaling forms [F(2,72) = 24.421, p < 0.01, partial η² = 0.404]. Specifically, the recall score for physical signaling was significantly higher than those for verbal signaling and non-signaling, and the recall score for verbal signaling was significantly higher than that for non-signaling. The results for the inference scores differed from those for the total comprehension scores, while the results for the recall scores were similar to those for the total comprehension scores. This implies that the media-dependent difference between the effects of physical and verbal signaling on reading performance arose mainly from the recall questions. DISCUSSION This study compared the effects of physical signaling and verbal signaling on reading comprehension and navigation when reading expository texts either on-screen or as printed text. An experiment was conducted to answer three research questions. The results showed that reading comprehension and navigation scores under signaling were significantly higher than those under non-signaling, indicating that signals help to construct cognitive maps during reading, i.e., a signaling promotion effect. Moreover, comparing the promotion effects of physical and verbal signaling on reading performance across media showed that, for reading on paper, no significant difference was found between the comprehension scores under the two forms of signaling, whereas for reading on screens, the comprehension score under physical signaling was significantly higher than that under verbal signaling. This shows that physical signaling promotes on-screen reading more effectively than verbal signaling. The following presents further analysis and discussion. Regarding question 1, this study showed that when reading non-signaling texts, the comprehension and navigation scores of the paper reading group were significantly higher than those of the on-screen reading group. Question 1 can therefore be answered: on-screen reading is not as effective as paper reading for technical expository text, which is consistent with existing research results (Singer and Alexander, 2017; Clinton, 2019).
According to cognitive map theory (e.g., Li et al., 2013; Hou et al., 2017a,b), compared with on-screen reading, text presentation on paper is more conducive to the reader's construction of a mental map; thus, readers achieve better comprehension and a higher degree of immersion and do not fatigue easily. Printed texts present readers with fixed typography, chapter information, page numbers, corner frames, and blank spaces. The process of flipping through a text on paper also provides rich kinesthetic feedback, such as visual perception and tactility, so readers unconsciously know the physical location of specific information within a text and its spatial relationship to their location in the text as a whole. This ability to locate information is important for comprehension and recall because, when readers search for an object in their memory, they often locate it by recalling relevant background information cues (Chun and Jiang, 1998). In contrast, on-screen readers receive only limited visual progress information (e.g., progress bars), and the lack of contextual information cues makes it difficult for them to identify the location of specific information in the text. Moreover, scrolling may prevent readers from forming a coherent psychological representation: it is difficult for readers to remember the spatial location of a specific text section, since it changes position as the reader scrolls down. Based on this, this study suggests that cognitive maps may play a crucial role in on-screen reading. The lack of sufficient background information and effective navigational cues in the on-screen presentation of text hinders the construction of cognitive maps, thus leading to low reading performance. Concerning question 2, in this study, for both paper and on-screen media, the comprehension and navigation scores under physical and verbal signaling were significantly higher than those without signals, showing a signaling promotion effect. For paper reading this has already been verified by many previous studies, so this study focused on how signals affect the construction of cognitive maps to achieve a promoting effect on on-screen reading. According to the existing research, this can be analyzed from the two aspects of background information and navigation cues. First, concerning the impact of signals on background information, rich background information not only helps the brain to process and encode textual content but also facilitates identification of the location and extraction of specific information (Chun and Jiang, 1998; Morineau et al., 2005). Although the background information encountered during the reading process is not directly related to the content read, it provides cues about the structure of the text, so that a mental map with rich information about the entire text can be formed in the brain. Reading each page is akin to leaving a footprint on a map, which unknowingly equips readers with a clear spatial perception of what is being read. However, in on-screen reading, because of its constant presentation form and indistinguishable external status, readers often find it difficult to localize a given piece of information within a text. Where signals are applied, the text is underlined, formatted in bold, or marked in the organizing chart, which greatly enriches the background information the text provides and helps to establish "landmarks" for the reader.
FIGURE 4 | Interaction analysis between reading media and signaling forms for inference scores.
During the reading process, the reader compares and synthesizes each mark with its corresponding content, and the marks together form a relationship route. On this basis, a comprehensive psychological representation can be built that guides the comprehension and extraction of textual content. Second, regarding the influence of signals on navigational cues, the form of text presented on a screen lacks effective navigational cues, which is not conducive to the recall and review of textual content (e.g., Li et al., 2013; Hou et al., 2017b; Singer and Alexander, 2017). Cognitive map theory suggests that individuals can successfully navigate via spatial positioning because they can use a map in their memory as a representation of space during navigation (i.e., to confirm the distance and direction between locations and flexibly plan routes; Weisberg and Newcombe, 2018). During the reading process, to complete the understanding, induction, and absorption of knowledge, readers also locate information and switch between different areas within the text (Wästlund et al., 2008), i.e., they apply navigation to reading. During the learning phase, good navigation helps to construct cognitive maps and promotes reader comprehension and text recall. During the information extraction phase, if the spatial location of the target content is saved within the cognitive map and the connection route is smooth, such information can be located quickly and accurately. Otherwise, readers can only navigate via linear search, which not only consumes more cognitive resources but also greatly decreases performance. In the non-signaling condition of this study, the comprehension and navigation scores of the paper reading group were significantly higher than those of the on-screen reading group. Moreover, the comprehension and navigation scores of the on-screen reading group improved significantly after signaling and were essentially identical to those of the paper reading group. This indicates that signaling effectively compensates for the lack of navigational cues in on-screen texts, so that readers can also use cognitive maps to navigate when reading text on a screen, thus improving comprehension and navigational performance. Regarding question 3, the results obtained by this study differ from our expectations and also from the results of previous studies. For example, Su (2018) and others compared the impact of different signals on the paper reading performance of junior high school students and found that an organizing chart improved their performance more than text formatting via highlighting and underlining. Interestingly, this study found that when reading expository text on a screen, physical signaling exerts a stronger promotion effect on navigation and comprehension performance than verbal signaling. This difference derives mainly from the detailed recall questions rather than the comprehension inference questions. Specifically, when readers were required to understand and grasp the main content of the text and reason about its central idea, the two signaling forms achieved the same promotion effect. When readers had to accurately recall detailed information and concepts, the on-screen reading score under physical signaling was significantly higher than that under verbal signaling.
FIGURE 5 | Interaction analysis between reading media and signaling forms for recall scores.
The reason may be that different signals act on different phases of the construction of cognitive maps. Physical signaling acts on the first and second phases of cognitive map construction. On the one hand, physical signaling is directly embedded in the text through underlining, bold formatting, etc., which both enriches and clarifies the background information of the text. During reading, the words, phrases, or sentences bearing signals are then treated as landmarks. Landmark knowledge orients readers as they navigate in an on-screen reading environment by visually highlighting the crucial information in the text. On the other hand, the highlighted landmark information itself has a specific logical relationship, which, in combination with the landmark locations, provides readers with interconnected navigational cues for the construction of the final situation model. This promotes the formation of route knowledge during the second phase. Verbal signaling (i.e., the presentation of an organizing chart showing key items and their hierarchical relationships) acts on the formation of route knowledge during the second phase. Previous studies mostly used reading software to embed organizing charts or navigational maps in reading materials, e.g., by presenting them beside the text page as a visual toolbar (Li et al., 2013; Sullivan and Puntambekar, 2015). Readers can automatically jump to the corresponding text content by clicking on a particular concept in the toolbar. This form of visual navigation within the document creates a closer connection between signals and information locations, and readers can use the organizing chart for real-time navigation, thus gradually improving the cognitive map in their memory. However, the organizing chart in this study was presented after the entire text, so the signals and text were relatively independent and not technically connected. While reading, readers may have found it difficult to link the topic concepts in the organizing chart to the content of the text individually. This made readers unable to identify the location in the original text corresponding to a headline and to answer detailed questions based on the context. This study used only behavioral experiments (a quantitative approach) to investigate cognitive maps. More comprehensive findings could be obtained using a mixed methodological approach, for example, by adding targeted interviews or open-ended survey questions. Future research should use a combination of quantitative and qualitative research methods. Besides, given that eye-movement research can "directly" observe people's cognitive processing during reading through eye movement indicators (Rayner, 1978, 1998), future studies should collect eye-movement information during on-screen reading to verify the process of cognitive map construction, and should investigate the impact of different forms of signaling on cognitive map construction during different phases by observing readers' eye movement trajectories between different signaling contents, so as to further explore the impact and functional mechanism of cognitive maps on text processing in the human brain. The participants in this study were young college students majoring in engineering, who had high computer competency and familiarity with on-screen reading.
Future research should also investigate a more diverse population to replicate the results; for example, age and major background may influence people's reading comprehension. Moreover, the organizing chart used in this study was presented independently at the end of the text. If it were embedded in the text, readers could navigate the text in real time, which would enable exploration of whether its promotion effect on on-screen reading would be equivalent to, or even surpass, that of underlining and bold formatting. Finally, the reading materials in this study were short expository texts of about 1,200 words, which made it comparatively easy for readers to grasp the topic and structure of the text compared with longer texts. This may also be why the promotion effect of verbal signaling in this research was not as pronounced as that of physical signaling. Future research should use the length of the reading material as a manipulable variable to further unveil the signaling strategies suitable for on-screen reading, which will play an important role in the widespread promotion of on-screen reading in the future. CONCLUSION This study showed that the use of signals can provide background information and navigational cues for on-screen reading, promote the construction of readers' cognitive maps, and effectively improve their on-screen reading performance. Specifically, the following three results were found: 1. Reading expository text on computer screens was not as effective as reading it on paper. 2. Whether the text was presented on paper or on screen, physical and verbal signaling could help readers to navigate, construct cognitive maps, and improve their reading performance. 3. For on-screen reading, physical signaling exerted a stronger promoting effect than verbal signaling, and this difference derived mainly from the detailed recall questions rather than the comprehension inference questions. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Hunan Normal University in China. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS ZS and TT conceived and designed the experiments. LY and TT performed the experiments. TT analyzed the data. ZS and TT wrote the manuscript. All authors contributed to the article and approved the submitted version.
Analytical equation for outflow along the flow in a perforated fluid distribution pipe Perforated fluid distribution pipes have been widely used in agriculture, water supply and drainage, ventilation, the chemical industry, and other sectors. The momentum equation for variable mass flow with a variable momentum exchange coefficient and a variable friction coefficient was developed using the momentum conservation method under the condition of a certain slope. The laws governing the variation of the momentum exchange coefficient and the resistance coefficient along the flow were analyzed, and a function for the momentum exchange coefficient was given. Based on a power-function velocity distribution, the momentum equation for variable mass flow was solved for different Reynolds numbers. The analytical solution contains pressure, gravity, friction, and momentum components and reflects the influence of the various factors on the pressure distribution along the perforated pipe. The calculated results of the analytical solution were compared with the experimental values of Jin et al. (1984) and Wang et al. (2001), with mean errors of 8.2%, 3.8%, and 2.7%, showing that the analytical solution of the variable mass momentum equation is qualitatively and quantitatively consistent with the experimental results. Introduction The perforated fluid distribution pipe is a typical type of dispensing equipment that ensures that the main stream flows uniformly out of the sidewall holes along the axial channel. Perforated pipes are widely applied in agriculture, the chemical industry, water supply and drainage, ventilation, and other fields. In practical projects, the outlet of the lateral pipe might be a pipeline, a spray nozzle, or micropores. Because the total flow consists of separated multi-flows, the flow in the perforated pipe is also referred to as embranchment flow, in which the discharge, head loss, and pressure distribution of the perforated pipe differ from those of non-perforated pipes. The flow characteristics of the perforated pipe are highly important for the pipeline design of sprinkler and drip irrigation projects and for applications in the chemical, dynamic, ventilation, and environmental fields [1-8]. Flow distribution in a perforated fluid distribution pipe has been studied by a number of authors using the energy equation method [3,5,6,9-19]. Acrivos et al. used one-dimensional flow equations to calculate the flow division in manifolds, and their results were applicable to a wide variety of combinations of channel dimensions, fluid velocities, physical properties, and pressure drops across the side ports [9]. Wu and Gitlin calculated the energy drop between outlets and the pressure distribution along a drip irrigation line with only 1% error by considering smooth pipes and using the Blasius equation for the friction coefficient [11]. Warrick and Yitayew presented an alternative treatment that included a spatially variable discharge function as a component of the basic solution in lateral trickle systems [12,13]. Following the derivation given in earlier work, Shen developed an analytical solution to evaluate the effect of friction on flow distribution in both dividing and combining flow manifolds [14].
Scaloppi and Allen applied a differential approach to multiple-outlet pipes with constant and continuously variable outflows and simulated the pressure distributions along uniform sprinkler systems, trickle irrigation laterals, manifolds, and gated pipes, considering the effect of ground slope and velocity head on the pipeline hydraulics [15]. Hathoot et al. investigated the problem of a lateral pipe with equally spaced emitters and a uniform slope and estimated the head loss between emitters using the Darcy-Weisbach formula, with variation in the Reynolds number, different zones of the Moody diagram, and a friction coefficient formula corresponding to each zone [16,17]. Jain et al. developed a method for evaluating lateral hydraulics using a lateral discharge equation approach and used a power equation to calculate the relationship between the inlet flow rate and the inlet pressure head of the lateral [3]. Clemo developed a model of pressure losses in perforated pipes, including the influence of inflow through the pipe walls, that compared favorably with the results of three experiments [20]. A series of steady-state experiments was presented to study the stage-discharge relationship for a porous pipe buried under loose-laid aggregate [21-23]. Afrin et al. studied the hydraulics of groundwater flow and porous pipe underdrains using a three-dimensional CFD model and computed the discharge coefficient for the perforated pipe [24]. Maynes et al. experimentally investigated the loss coefficient and onset of cavitation caused by water flow through perforated plates [25]. Because the momentum conservation method can neglect the details of the flow and channel structure, the hole form, and other factors by absorbing them into a momentum exchange coefficient, it has been used to analyze the flow mechanism of a perforated fluid distribution pipe and has formed the theoretical basis for uniform fluid distribution [1,2,4,7,8,26-29]. Bassiouny and Martin analyzed mass and momentum balances for a flow element in both the intake and exhaust conduits based on one-dimensional flow equations [1,2]. By introducing a momentum equation, Jin et al. studied a design method for determining the major parameters of branched pipe distributors in gas-solid fluidized beds for uniform gas distribution [4]. Kang and Nishiyama developed a lateral discharge equation to express the relationship between the discharge and pressure head at the inlet of a lateral using the finite element method [25,26]. Wang et al. analyzed the friction and pressure-recovery coefficients in porous pipe manifolds and obtained an analytical solution of the momentum equation with varying mass and varying coefficients [7]. Wang et al. introduced a general theoretical model to calculate the flow distribution and pressure drop in a channel with porous walls [8]. Yildirim and Ağıralioğlu compared seven hydraulic methods for calculating the flow characteristics of a perforated pipe used in micro-irrigation under special limited design conditions [29]. The above studies based on the momentum conservation method rely primarily on experimental results to determine the friction coefficient and momentum exchange coefficient, and the solution of the theoretical model must be constrained to the condition of constant coefficients. However, the flow in a perforated pipe occurs primarily under the condition of variable coefficients in practical projects.
Therefore, an analytical solution with variable coefficients is the key problem for flow distribution in a perforated fluid distribution pipe [8]. Another problem with the momentum method is that it neglects the influence of the slope on the flow when establishing the variable mass momentum equation (the pipe is considered only as horizontal). In mechanical terms, the momentum method then considers only the influence of hydrodynamic pressure, friction, and momentum changes on the flow but neglects the effect of gravity. For most perforated pipes used in micro-irrigation, the slope is not horizontal, and gravity might have a significant influence on the flow. Therefore, the gravity component should be introduced into the variable mass momentum equation to account for the influence of the slope. The objectives of this study are to develop the momentum equation of variable mass flow using the momentum conservation method under the condition of a certain slope and to solve the momentum equation for different Reynolds numbers based on a power-function velocity distribution. The analytical solution is suitable for flow in a perforated fluid distribution pipe, particularly one with a small length-to-diameter ratio in which the flow is influenced by momentum and friction. Momentum equation of variable mass for outflow along the flow The mass decreases along the flow in porous tube flow or embranchment flow; thus, it is characterized as variable mass flow, and the pressure distribution is affected by momentum exchange, head loss, and slope. The velocity of the flow decreases as mass is lost through the holes in the sidewall; thus, the momentum changes along the flow, and a certain amount of kinetic energy is converted into pressure head, which causes the pressure to increase along the flow. Friction produces pressure losses that cause the pressure to decrease, and at the same time, the flow might produce rough waves at the outlets of the holes, which can increase the energy losses. The head change induced by the slope of the pipe also has a significant effect on the pressure distribution [7]. Therefore, it is necessary to consider these influence factors simultaneously if the pressure change is to be maintained within a certain range. Fig 1 shows a schematic of the pipeline with multiple outlets. In Fig 1, the perforated tube is closed at one end, has the same cross-sectional area along the flow, and has one row of radial holes. The following assumptions are applied: (i) the flow is one-dimensional; (ii) the fluid is incompressible; (iii) the velocity is 0 at the closed end; (iv) the environmental pressure is constant, and the flow through the holes in the sidewall is free flow; (v) the holes are perpendicular to the axis, and the spacing and size of the holes are uniform; (vi) the size of the cross-section is constant, and the slope is uniform along the flow; and (vii) under these assumptions, the discharge of the holes depends on the pressure, i.e., the distribution of the discharge of the porous pipe depends on the pressure distribution. For a flow of variable mass in the perforated pipe, we select a micro-control volume, as shown in Fig 2. We select the micro-interval dx along the x-axis (dx can contain several hole spacings), and all holes in an infinitesimal section are assumed to have a uniform velocity, namely, the velocity at point x.
Under this condition, we can establish the continuity and momentum equations for the infinitesimal section, and the momentum equation of the variable mass is generally expressed as follows [30]:

(1/ρ)(dp/dx) + (λ/(2D))V² + d(kV²)/dx − gI = 0    (1)

where p is the hydrodynamic pressure (N·m⁻²); ρ is the density of the fluid (kg·m⁻³); V is the velocity of the flow (m·s⁻¹); D is the tube diameter (m); λ is the friction coefficient along the flow; k is the momentum exchange coefficient; g is the acceleration of gravity (m·s⁻²); and I is the slope of the tube. The rationality and convenience of the momentum equation lie in the fact that it can neglect the details of the flow and absorb the effects of diversion and vortices into the momentum exchange coefficient. Therefore, the momentum equation of the variable mass, which is the basis of the perforated pipe problem, is the theoretical model for practical engineering problems and can be used to calculate and analyze the flow mechanism and variation law of the perforated pipe. Because the momentum exchange coefficient k and the friction coefficient λ are functions of the velocity, Eq (1) is seriously nonlinear. The nonlinear equation presents three difficulties: (i) the law governing the variation of the friction coefficient λ with the Reynolds number and system structure, and its difference from the law of λ in a smooth pipe; (ii) the law governing the variation of the momentum exchange coefficient k with the momentum and velocity of the fluid; and (iii) the solution of the nonlinear equation. The critical problem in solving the nonlinear differential equation lies in how to analyze and solve the equation while accounting for the peculiar properties of embranchment flow. Based on previous results, this article investigates the variation laws of the momentum exchange coefficient k and the friction coefficient λ by applying theoretical and experimental methods simultaneously and subsequently solves the momentum equation of the variable mass. Velocity distribution in the perforated pipe The flow in the perforated pipe is a variable discharge flow. Generally, the discharge varies among different holes; thus, the variation of the discharge is a complex function of position x along the flow. The tube diameter D is constant in an equal-section tube, and the velocity is proportional to the discharge. Therefore, the velocity distribution can be assumed to be a power function [29]:

Q_x = Q_0(1 − x/L)^z = Q_0(1 − X)^z,  V = Q_x/A_0 = V_0(1 − X)^z    (2)

where Q_x is the discharge through the x-section (m³·s⁻¹); Q_0 is the total (entrance) discharge (m³·s⁻¹); L is the length of the perforated pipe (m); V_0 is the velocity at the entrance (m·s⁻¹); A_0 is the size of the cross-section, A_0 = (1/4)πD² (m²); X = x/L; and z is an exponent. Three types of velocity distribution occur as the exponent z varies, as shown in Fig 3: (1) z < 1, in which the outflow discharge increases gradually along the flow; (2) z = 1, in which the outflow is approximately uniform; and (3) z > 1, which corresponds to friction-controlled flow, in which the friction is greater than the momentum force and the velocity decreases gradually along the flow. In an equal-section porous pipe, the exponent in Eq (2) should be related to the opening ratio η (the area ratio of the holes to the entire sidewall), the length-diameter ratio E (the length divided by the tube diameter), and the slope I. The following assumption can be made: as the relative deviation of the discharge among the holes becomes smaller, the exponent z approaches 1, and vice versa.
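As a quick illustration of Eq (2), the short sketch below evaluates the assumed power-law velocity profile for the three regimes of z; the parameter values are illustrative only.

```python
# Minimal sketch of the power-law velocity distribution of Eq (2):
# V(X) = V0 * (1 - X)**z, with X = x/L and the closed end at X = 1.
import numpy as np

V0 = 1.5                          # entrance velocity in m/s (illustrative)
X = np.linspace(0.0, 1.0, 11)     # dimensionless axial position x/L

for z, regime in [(0.8, "z < 1 (outflow increases along the pipe)"),
                  (1.0, "z = 1 (approximately uniform outflow)"),
                  (1.3, "z > 1 (friction-controlled, outflow decreases)")]:
    V = V0 * (1.0 - X) ** z       # velocity decays to zero at the closed end
    print(f"{regime}: V at mid-pipe = {V[5]:.3f} m/s")
```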
When x = 0.5L, Q_0.5 = Q_0·0.5^z is derived from Eq (2), and the calculation formula for the exponent z is expressed as follows:

z = ln(Q_0.5/Q_0)/ln(0.5)    (3)

where Q_0.5 is the discharge through the L/2-section and is equal to the algebraic sum of the discharges of the holes from x = 0.5L to x = L, namely:

Q_0.5 = Σ q_j (summed over the holes from x = 0.5L to x = L)    (4)

where j is the serial number of the hole (numbered in order from the tube entrance to the end); q_j is the discharge of hole number j; and N is the total number of holes in the perforated pipe. According to the experimental discharge data and Eqs (3) and (4), the exponent z can be calculated, and the effect of each dimensionless number on the exponent z can be analyzed. According to Eq (3), if Q_0.5 = 0.5Q_0, the algebraic sum of the discharge of the former half of the pipe is equal to that of the latter half, and z = 1, which indicates that the discharge of the holes is uniform between the former and latter halves. Strictly speaking, the discharges differ, but the relative deviation is small under the condition Q_0.5 = 0.5Q_0; thus, the outflow can be approximated as uniform. If Q_0.5 < 0.5Q_0, the discharge in the former half is greater than that in the latter half, and z > 1, which indicates that the outflow discharge is gradually reduced along the pipe. If Q_0.5 > 0.5Q_0, the former is less than the latter, and z < 1, which indicates that the outflow discharge increases gradually. When the effects of the slope I, length-diameter ratio E, and opening ratio η on the exponent z are mutually independent, the formula for the exponent z containing these dimensionless numbers can be expressed as Eq (5) [30], where E = L/D is the ratio between length and diameter, and η = A_1/A_2 = Nd²/(4DL) is the opening ratio, i.e., the ratio of the total perforated area A_1 = N·(1/4)πd² to the entire internal surface A_2 = πDL. First, with the opening ratio η and length-diameter ratio E held constant, the model is fitted to the I~z relationship. Then, with the opening ratio held constant, the model is fitted to the E~z relationship according to the I~z relationship. Finally, the model is fitted to the η~z relationship, considering the combined influence of I and E, and the empirical formula for the exponent z is obtained as Eq (6), where the range of I is -0.001 to 0.009, E is 286 to 2000, η_0 = d_0²/(2D_0S_0), and d_0, D_0, and S_0 are, respectively, the diameter of the hole, the diameter of the tube, and the hole spacing. In this article, d_0 = 0.0012 m, D_0 = 0.035 m, S_0 = 0.30 m, η_0 = 6.8571×10⁻⁵, and the range of η/η_0 is 0.0625 to 2. The exponent z can be calculated under different conditions according to Eq (6). Eq (6) indicates that the exponent z decreases with I but increases with E and η. In actual engineering projects, the exponent z should be determined according to the combined influence of I, E, and η, which indicates that the best approach to ensuring that the exponent z reaches 1 is to adjust I, E, and η. Variation of the resistance coefficient λ For the perforated pipe, the friction coefficient is related to the sidewall roughness and the tube structure in addition to the Reynolds number. In actual engineering projects, the Reynolds number decreases with the discharge along the flow (maximum at the entrance, 0 at the end). Therefore, the friction coefficient increases along the flow. If the flow pattern is smooth turbulent flow (Re < 10⁵) at the entrance, the pattern at the end is laminar flow. In fact, laminar flow always exists at the end of the perforated pipe.
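A sketch of the estimation procedure of Eqs (3) and (4) follows, using made-up per-hole discharge values; only the halving relation Q_0.5 = Q_0·0.5^z quoted in the text is assumed.

```python
# Minimal sketch of estimating the exponent z from measured hole discharges,
# following Eqs (3)-(4). The discharge values below are made up for illustration.
import numpy as np

q = np.array([1.10, 1.06, 1.03, 1.00, 0.98,    # per-hole discharges (L/h),
              0.96, 0.95, 0.94, 0.93, 0.93])   # ordered entrance -> closed end

Q0 = q.sum()                        # total (entrance) discharge
Q05 = q[len(q) // 2:].sum()         # Eq (4): summed discharge of the latter half
z = np.log(Q05 / Q0) / np.log(0.5)  # Eq (3): z = ln(Q0.5/Q0) / ln(0.5)
print(f"estimated z = {z:.3f}")     # here Q0.5 < 0.5*Q0, so z > 1
```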
In the solution process, if a single formula for the friction coefficient λ₀ were used along the entire pipe, the calculated value of λ near the end would be less than the actual value, and the deviation would be largest at the closed end, which affects the pressure distribution there, especially at a closed end. A modified coefficient χ is therefore introduced into the solution of the variable-mass momentum equation, i.e., λ = χλ₀, where λ₀ is the friction coefficient of the smooth pipe. If Re₀ < 2000, the flow in the entire pipe is laminar, the friction coefficient needs no modification, and χ = 1.0. If Re₀ > 2000, the section from the entrance to approximately the point where Re = 2000 is turbulent and the remainder is laminar. The extent of the laminar region is related to Re₀ (the Reynolds number at the entrance) and the length-diameter ratio E. In this condition χ > 1.0, with a range of 1.1–1.5; χ = 1.3 is therefore adopted throughout to simplify the calculation. Different formulas for the friction coefficient apply for different values of Re. Thus, in the solution process, the flow pattern (Re) is judged first, and the proper formula is selected according to the value of Re.

Momentum exchange coefficient k

The derivation of the variable-mass momentum equation shows that the momentum exchange coefficient is a modification of the momentum component carried away by the outflow. The velocity deviation between the entrance and the outlet of a hole is caused by the velocity component carried by the outflow; i.e., the relative deviation ratio of the mainstream momentum, ΔV²/V², equals the ratio of the momentum component carried by the outflow to the total mainstream momentum. Because the momentum change is related to the vortex, the friction and the momentum component carried by the outflow, a modified coefficient β is applied, giving β(ΔV²/V²). If the momentum variation ratio at the first hole is α, the momentum exchange coefficient k can be expressed as follows [31]:

$$k = \alpha + \beta\,\frac{\Delta V^2}{V^2} \tag{7}$$

When the velocity distribution of the mainstream is given, the relative momentum deviation can be differentiated and then integrated to derive its functional expression, and the general formula (Eq (8)) follows. The coefficients α and β are constants. At the entrance, x = 0, V = V₀, and k = α. At the closed end, x = L, ΔV²/V² = −1, and k = 0.5. Substituting ΔV²/V² = −1 into Eq (7) gives k = α − β, the momentum exchange coefficient of the last hole, so (α − β) applies at the closed end. According to previous results [14,28], β ≈ 0.15 and α ≈ 0.65, and these values are used in this article. This analysis shows that the coefficients α and β have definite physical meanings grounded in the law of conservation of momentum, which is a novel point of this article. Combining Eqs (2) and (8), the formula for k is obtained as Eq (9).

Solution of the momentum equation

The theoretical model established and derived above contains the two variable parameters λ and k in Eq (1). With the functions of λ and k now given, the momentum equation can be solved. Combining Eqs (8) and (1) yields Eq (10); combining Eqs (10) and (2) yields Eq (11). In actual engineering projects, the friction coefficient λ is variable, which affects the pressure distribution.
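A sketch of the two variable coefficients entering Eq (1) follows. The laminar (64/Re) and Blasius (0.3164/Re^0.25) forms of λ₀ are the ones that appear in the Reynolds-number ranges treated in the next section; the closed form used for k(X) is only our reconstruction of Eqs (7)–(9), chosen so as to satisfy the stated boundary values k(0) = α and k(1) = α − β, and should be treated as an assumption.

```python
# Sketch of the variable coefficients lambda = chi*lambda_0 and k(X).
import numpy as np

ALPHA, BETA = 0.65, 0.15

def friction(Re, chi):
    """lambda = chi * lambda_0, with lambda_0 selected by the local flow
    pattern (Re). chi = 1.0 if the whole pipe is laminar (Re0 < 2000),
    otherwise ~1.1-1.5 (1.3 is used in the paper)."""
    Re = max(Re, 1e-9)               # velocity -> 0 at the closed end
    lam0 = 64.0 / Re if Re < 2000.0 else 0.3164 / Re ** 0.25
    return chi * lam0

def momentum_exchange(X, z):
    """k = alpha + beta*dV^2/V^2, with dV^2/V^2 = (V^2 - V0^2)/V0^2 assumed,
    which reproduces k(0) = alpha and k(1) = alpha - beta."""
    return ALPHA + BETA * ((1.0 - X) ** (2.0 * z) - 1.0)

print(friction(1500, 1.0), friction(25000, 1.3))             # laminar vs Blasius
print(momentum_exchange(0.0, 1.0), momentum_exchange(1.0, 1.0))  # 0.65, 0.5
```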
However, λ₀ is a function of the Reynolds number, and different formulas apply in different Reynolds-number ranges. To derive the analytical solution under the different conditions, the Reynolds-number ranges must be treated separately.

Re < 2000. According to λ = 64χ/Re = 64χν/(DV) = 64χν/(DV₀(1 − X)^z) and Eq (11), Eq (12) follows. Integrating Eq (12) from 0 to x and non-dimensionalizing, we obtain Eq (13), or Eq (14a), where Eu = (p_x − p₀)/(ρV₀²), X = x/L, E = L/D, and Re₀ = V₀D/ν. If X = 1, Eu takes an indeterminate form of the type 0·∞, and in this condition the value of Eu is taken as its limit.

2000 ≤ Re < 10⁵. According to λ = 0.3164χ/Re^0.25 = 0.3164χ/(VD/ν)^0.25 and Eq (11),

$$\frac{1}{\rho}\frac{dp}{dx} + 0.1582\,\chi\,\nu^{0.25}\,\frac{V^{1.75}}{D^{1.25}} + k\frac{d(V^2)}{dx} = gI \tag{15}$$

Integrating Eq (15) from 0 to x and non-dimensionalizing, we obtain Eq (16) and Eq (16a).

10⁵ ≤ Re < 10⁷. Substituting the friction formula for this range into Eq (11) gives Eq (17); integrating Eq (17) from 0 to x and non-dimensionalizing, we obtain Eq (18) and Eq (18a).

The analytical solutions of the variable-mass momentum equation are thus obtained in the different Reynolds-number ranges according to the power-law distribution assumption, namely Eqs (14a), (16a) and (18a). These equations contain the pipe slope I, the length-diameter ratio E and the flow parameter Re₀. At the same time, three dimensionless parameters (Eu = (p_x − p₀)/(ρV₀²), Fr = V₀/(gL)^0.5, and Re₀ = V₀D/ν) appear in these equations, which indicates that the major forces in porous-pipe flow are hydrodynamic pressure, gravity and friction. These three forces are interrelated, and all affect the porous flow. All three equations contain the Eu number on the left-hand side; Eu is the pressure item and reflects the variation of the hydrodynamic pressure. The right-hand side of each equation contains four items: a gravity item, a friction item and two momentum items. The first is the gravity item, which reflects the influence of the slope I on Eu. The second is the friction item, which reflects the influence of the friction coefficient λ on Eu. The third and fourth items are momentum items, which reflect the influence of the momentum variation on Eu. The three equations differ only in the second item, which reflects the influence of the Reynolds number on the drag coefficient λ. If the slope I and the length-diameter ratio E are held constant, the effect of the Reynolds number on the pressure distribution of the porous pipe can be derived. In the engineering design process, the length-diameter ratio can be derived if the discharge and the slope are given. For built projects, the three equations can be used to assess the working condition of the perforated pipes and the rationality of the design. Analysis of the analytical solutions shows that the influences of the gravity, friction and momentum items on Eu (the hydrodynamic item) are qualitatively the same in Eqs (14a), (16a) and (18a). In practice, the condition 2000 ≤ Re < 10⁵ is the usual one; thus, the effect of the various factors on the pressure distribution of the porous pipe is analyzed on the basis of Eq (16a). The right-hand side of Eq (16a) contains four items. The first is the gravity item, which varies with X: at X = 0 it equals 0, and at X = 1 it equals gIL/V₀², which indicates that the effect of gravity on the pressure distribution is linear, with a gradient related to the slope and the length of the porous pipe. The second item is the friction item, which also varies with X.
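Because the printed forms of Eqs (14a), (16a) and (18a) are not reproduced above, the following sketch obtains Eu(X) by numerical quadrature of Eq (11) instead, reusing friction() and momentum_exchange() from the previous sketch. It reproduces the gravity/friction/momentum decomposition qualitatively but is not the paper's analytical solution; the k(X) form remains the assumption stated earlier.

```python
# Numerical Eu(X) = (p_x - p_0)/(rho*V0^2) from Eq (11), trapezoid quadrature.
import numpy as np

def eu_profile(Re0, E, I, z, V0, L, chi=1.3, n=2000):
    """Eu(X) = gravity item - friction item - integral of k d(V/V0)^2."""
    g = 9.81
    X = np.linspace(0.0, 1.0, n)
    v2 = (1.0 - X) ** (2.0 * z)                        # (V/V0)^2
    lam = np.array([friction(Re0 * (1.0 - x) ** z, chi) for x in X])
    gravity = g * I * L / V0 ** 2 * X
    f = lam * v2                                       # friction integrand
    friction_item = 0.5 * E * np.concatenate(
        ([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(X))))
    k = momentum_exchange(X, z)
    m = k * np.gradient(v2, X)                         # momentum integrand
    momentum = -np.concatenate(
        ([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(X))))
    return X, gravity - friction_item + momentum

X, Eu = eu_profile(Re0=25000, E=1047, I=0.003, z=1.0, V0=0.6, L=50.0)
print(round(float(Eu[-1]), 3))   # Eu at the closed end
```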
According to the friction item, the effect of friction increases and the hydrodynamic pressure difference decreases as the length-diameter ratio E increases or the Reynolds number Re₀ decreases, and vice versa. The third and fourth items are momentum items that vary with X: at X = 0 they sum to α, and at X = 1 they sum to (α − β). This variation indicates that the momentum exchange is variable, with a maximum at the entrance and a minimum at the closed end. By the same reasoning, the same conclusions can be drawn for the conditions Re < 2000 and 10⁵ ≤ Re < 10⁷. In the engineering design process, if the slope is given, the length-diameter ratio and the Reynolds number can be adjusted to balance the influences of gravity, momentum and friction; in this condition, the hydrodynamic pressure distribution tends toward uniformity and the discharges of the various holes are the same.

Influence of slope on the pressure distribution

The validity and accuracy of the analytical solution for the variable-mass momentum equation are verified against the experimental results from the micro-pressure porous pipe. The usual condition 2000 ≤ Re < 10⁵ is considered, and the analytical solution is calculated from Eq (16). To simplify the calculation, α = 0.65 and β = 0.15 are selected. For the exponent z, z = 1 is selected initially and then verified against Eq (6) and the actual parameters. In the analytical solution of the variable-mass momentum equation, the gravity item reflects the influence of the slope I on the value of Eu. For a pipe of diameter D = 0.04775 m, length L = 50 m, hole interval S = 0.15 m, hole diameter d = 0.0012 m, and entrance water head H = 0.5 m, the I–Eu curves have been calculated and are presented in Fig 4. In Fig 4, the length-diameter ratio is E = 1047 and the flow parameter Re₀ lies in the range 20,538–28,049. According to the calculated results, the influence of the friction item is larger than that of the momentum item in this condition because of the small velocity. Thus, the perforated-pipe flow is friction-controlled at the different slopes, and the pressure distribution is governed by gravity and friction. According to Fig 4, if I ≤ 0, the hydrodynamic pressure decreases along the flow because of the combined effect of gravity and friction, reaching a maximum at the entrance and a minimum at the closed end. If I > 0, the gravity item is positive, opposing the friction. If I = 0.003, the force of gravity is counteracted by the friction, and the hydrodynamic pressure tends to be uniform along the flow. With a further increase in the slope I, gravity exceeds the friction and the sign of Eu changes from negative to positive; the maximum of the hydrodynamic pressure then occurs at the closed end.

Influence of the length-diameter ratio on the pressure distribution

For a pipe of diameter D = 0.035 m, slope I = 0.001, hole interval S = 0.3 m, hole diameter d = 0.0012 m, and entrance water head H = 0.5 m, the results calculated from Eq (16) for different values of E are shown in Fig 5. From Fig 5, we observe that for E ≥ 571 the pressure distribution curve is concave downward, with its minimum at an interior point X.
Before this point, the friction item in Eu outweighs the gravity and momentum items; thus, both Eu and the hydrodynamic pressure decrease along the flow. At this point, the influences of gravity, momentum and friction are balanced, and the hydrodynamic pressure reaches its minimum. After this point, the friction item in Eu is smaller than the gravity and momentum items; thus, Eu and the hydrodynamic pressure increase along the flow. If E = 700, the influences are balanced, and the hydrodynamic pressure is uniform along the flow. If E = 286, Eu increases linearly along the flow, which indicates that the forces of momentum and friction are balanced and the pressure distribution of the porous pipe rises linearly with the gravity item. As E decreases further, the influence of momentum exceeds that of friction, and the pressure distribution curve becomes concave upward. In this condition, the porous-pipe flow is momentum-controlled flow, which commonly occurs in the chemical industry.

Comparison of measured and predicted results

Comparison of measured and predicted results under different slopes. For a pipe of diameter D = 0.04775 m, length L = 50 m (E = 1047), hole interval S = 0.15 m, hole diameter d = 0.0012 m and entrance water head H = 0.5 m, the exponent z for the different slope conditions is first calculated from Eq (6), and the variation of Eu is then calculated from Eq (16). The comparison between the predicted and experimental results is given in Fig 6. Owing to the small velocity (entrance velocity V₀ ≤ 0.6 m/s) and the limitations of the experimental setup, the small head difference between the start and the entrance causes a larger deviation there (maximum relative error 74.27%); elsewhere the analytical and experimental results agree closely (minimum relative error 0.02%), which indicates that the power-law assumption for the porous distribution is reasonable.

Comparison of measured and predicted results under different length-diameter ratios. The calculated and measured results under different conditions of E are shown in Fig 7. The experimental results indicate that the calculation and the experiments agree. The analytical solution contains two parameters that can affect the calculated pressure distribution: one is the exponent z of the velocity distribution, and the other is the modified coefficient χ of the drag coefficient. For z = 0.8, 1 and 1.2, a comparison between the predicted and experimental results is shown in Fig 8. From Fig 8, we observe that the variation of the exponent z affects the pressure distribution, especially at the closed end. Generally speaking, the larger the deviation of the exponent z from 1.0, the larger its influence, which is consistent with the experimental results. According to Eq (16), the modified coefficient χ also has a greater effect near the closed end. The selection of these two parameters according to the experimental system is a critical problem that deserves additional attention.

Comparison with other experimental results. The perforated flow distributors used in the chemical industry and the perforated pipes used in water supply and drainage are characterized by a slope I = 0, a small length-diameter ratio E and an opening ratio η = Nd²/(4DL) with η >> η₀.
In this condition, z > 0 can be derived according to Eq (6), which is obviously unreasonable. For these types of pipes, the variation law of the exponent z with the opening ratio η should be investigated carefully. To simplify the calculation, α = 0.65, β = 0.15, z = 1 and χ = 1.3 are selected. A comparison between the predicted results calculated by Eq (16) and the experimental values measured by Jin et al [4] is shown in Fig 9. Comparisons between the predicted results calculated according to Eqs (16) and (18) and the experimental values from Wang et al [8] are illustrated in Fig 10 (D = 21 mm, L = 525 mm, d = 3 mm, N = 21 and E = 25). The mean errors between the calculated results and the experimental values are 3.8% and 2.7%, respectively, which shows that the analytical solution of the variable-mass momentum equation is qualitatively and quantitatively consistent with the experimental results. The comparison shows that the calculated and experimental results agree, which indicates that the analytical solution of the variable-mass momentum equation is widely applicable and suits both friction- and momentum-controlled flow, the latter being common in the chemical industry. For momentum-controlled flow, the friction item is smaller than the momentum item, the force of the momentum exceeds that of the friction, and the pressure of the porous pipe increases along the flow. From Eq (19), we note that Eu|_{X=1} < 0.5 for momentum-controlled flow, and the analytical results of Figs 9 and 10 are consistent with this view.

Conclusions

(i) According to the momentum law, the momentum equation of the variable mass, which depends on the momentum exchange coefficient k and the drag coefficient λ, is established for the condition of uniform slope.

(ii) The velocity distribution is analyzed according to the power-law assumption for the perforated distribution, and the empirical formula for the exponent z is deduced from the experimental data. In the engineering design process, the slope, the length-diameter ratio and the opening ratio of the porous pipe can be adjusted to make the exponent z tend toward 1 and reduce the relative deviation of the discharge, thereby ensuring the uniformity of outflow among the holes.

(iii) We analyze the variation laws of the drag coefficient λ and the momentum exchange coefficient k and deduce the functional relationship for k. This relationship contains two constant parameters: α, which denotes the momentum exchange at the first hole, with α ≈ 0.65; and (α − β), which denotes the momentum exchange at the last hole, with (α − β) = 0.5 and β ≈ 0.15 at the closed end.

(iv) Analysis of the momentum equation indicates that the major forces in porous-pipe flow are hydrodynamic pressure, gravity and friction. These three forces are interrelated and all affect the perforated flow. The analytical solution is composed of pressure, gravity, friction and momentum components, which reflect the comprehensive influence of these factors. In the engineering design process, if the slope is given, the length-diameter ratio and the Reynolds number can be adjusted to balance the influences of gravity, momentum and friction; in this condition, the hydrodynamic pressure distribution tends toward uniformity, and the discharge from the different holes is the same.
(v) Verification against previous studies and the experimental data indicates that the analytical results agree with the experimental results, with mean errors of 8.2%, 3.8% and 2.7%. This outcome shows that the analytical solution of the variable-mass momentum equation can be applied widely and is suitable both for friction-controlled flow, which is governed by gravity and friction, and for momentum-controlled flow, which is governed by momentum and friction.
Neutron and charged particle identification by means of various detectors

Identification methods for photons, neutrons and heavy ions using scintillator and semiconductor detectors are discussed, stressing the advantages of signal digitization. The detector arrays comprising a large number of individual cells, e.g. [1, 2], used in nuclear physics experiments require automatic procedures to identify the reaction products. This is accomplished starting from the signals induced in different detection materials: organic scintillators for neutrons and photons, and inorganic scintillators, diamond or silicon for charged particles. The description of their response functions has to remain simple and accurate at the same time. One must take into account the nonlinear processes involved, such as "quenching", electric carrier recombination and plasma-delayed collection. The digitization of the signal, giving access to its shape as a function of time, increases the performance and opens new perspectives.

1 Neutron identification in liquid scintillators

Organic liquid scintillators like BC501A (the former NE213), available in large-volume cells, are based on the molecular fluorescence induced by the passage of energetic photons, muons or neutrons. The light emission accompanying the relaxation between singlet states is fluorescence (≈ 10⁻⁹ s in liquid scintillators), while phosphorescence (> 10⁻⁶ s) denotes the radiative de-excitation, of low probability, from a triplet state to the singlet ground state. In between, delayed fluorescence (10 to 100 times slower than fluorescence) also occurs at high densities of molecules excited into triplet states [3]. Photons and muons interacting with scintillator materials often transfer part of their energy to fast electrons which, due to their rather low stopping power, produce a weak density of excited molecules, decaying in turn by fluorescence. Neutrons transmit their energy to the host hydrogen nuclei, which are characterized by a high electronic stopping power. Mainly via π-electron ionization followed by recombination, these protons cause a high density of excited molecules decaying by fluorescence, delayed fluorescence or phosphorescence [3]. Consequently, the scintillation signal following the interaction of a neutron contains, besides the "prompt" decay associated with fluorescence, a slow component connected especially to delayed fluorescence. As for the slow phosphorescence, although present, it is not exploited at the high event rates recorded in nuclear physics experiments. It is this difference in response to protons and fast electrons that makes possible the neutron/gamma (n − γ) discrimination by pulse shape analysis. An example is shown in Fig. 1 for photon- and neutron-induced scintillations in a BC501A cell. The signals, normalized to the same total charge (related to the transferred energy), were averaged in several charge slices covering the light yield range 200–3000 keVee (see below).
The associated photomultiplier signals were digitized by means of the Fast Acquisition SysTem for nuclEar Research (FASTER) developed at LPC Caen. While the γ-induced signals have identical shapes, those excited by neutrons show a larger slow fraction as the energy transferred to the protons decreases. Additionally, the almost unique crossing point on the falling part (at around 30 ns) suggests the position of a favourable border between two time gates of current integration giving Q_f (fast) and Q_s (slow), two adequate fractions of the total charge Q_t. Several analysis algorithms have been devised [4], recently based on digital acquisition [5]. The traditional method is based on the Q_s–Q_t correlation, as in the two-dimensional plot obtained with standard VME analog electronics shown in Fig. 2, left panel.
The muons from the electromagnetic showers induced in the Earth's atmosphere by high-energy cosmic rays fall in the same geometric locus as the γ-rays, but at higher Q_t. By means of γ sources with known energies, the total charge was calibrated into MeV electron equivalent (MeVee). The energy deposited in the cell by these muons is useful at higher Q_t [1]. A figure of merit (FOM) is used to quantify the discrimination quality in a total-charge slice ΔQ_t. It is frequently calculated as the ratio of the distance between the ridge lines of the two populations in Fig. 2 to the sum of their widths at a given fraction of the neutron maximum, in the considered ΔQ_t slice. The problem occurs at low transferred energies, which induce a low light yield and hence a small Q_t. Due to the associated large fluctuations, the two geometrical loci, that of the neutrons (via the recoil protons) and that of the photons (recoil electrons), merge. In this case, it may be important to estimate the overlap of the n- and γ-loci, in particular the contamination (CON) of the neutron geometrical locus by γ events. Both quantities FOM and CON are obtained, in each abscissa slice of Fig. 2 (left panel), by fitting, e.g. with the sum of two Gaussians, the number of events in the slice as a function of the discriminating variable Q_s. Taking as reference level one tenth of the neutron maximum, for example, one gets FOM_1/10 and the contamination CON (very sensitive to Q_t), plotted in Fig. 2, right panel. Together, these quantities report well on the n − γ discrimination capability of a detector. The p-Terphenyl single-crystal scintillator indeed allows a better discrimination, but the available size is still limited. New plastic scintillators offer a promising alternative [6].
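A minimal sketch of the charge-comparison discrimination and of the FOM/CON extraction described above follows; the gate positions, the fit parameters and the midpoint cut used for CON are illustrative choices on our part, not FASTER internals (the neutron ridge is assumed to lie at larger Q_s than the γ ridge).

```python
# Sketch of Qf/Qs integration and of FOM_1/10 and CON in one DQt slice.
import numpy as np
from scipy.stats import norm

def charges(wave, t, t_split=30e-9, t_stop=300e-9):
    """Split a baseline-subtracted pulse (signal start at t = 0) into the
    fast/slow gates bordered at the ~30 ns crossing point."""
    fast = (t >= 0.0) & (t < t_split)
    slow = (t >= t_split) & (t < t_stop)
    q_f = np.trapz(wave[fast], t[fast])
    q_s = np.trapz(wave[slow], t[slow])
    return q_f, q_s, q_f + q_s

def width_tenth_max(sigma):
    """Full width of a Gaussian ridge at one tenth of its maximum."""
    return 2.0 * sigma * np.sqrt(2.0 * np.log(10.0))

def fom_and_con(mu_n, sig_n, mu_g, sig_g):
    """FOM_1/10 from the ridge distance over the sum of the two widths; CON
    estimated as the gamma fraction spilling past the inter-ridge midpoint."""
    fom = abs(mu_n - mu_g) / (width_tenth_max(sig_n) + width_tenth_max(sig_g))
    con = norm.sf(0.5 * (mu_n + mu_g), loc=mu_g, scale=sig_g)
    return fom, con
```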
The accuracy of the discrimination depends strongly on the correct determination of the baseline and of the start time of the signal. These aspects were carefully treated in FASTER, a digital acquisition system allowing the transfer of a high flux of time-stamped events. It is appropriate for organic scintillators (500 MHz/12-bit digitizer) and for inorganic scintillators and semiconductors (125 MHz/14-bit). FASTER is provided with FPGA-implemented single-channel algorithms such as QDCs, ADCs and CFDs. By using the time-stamped events, one may realize a TDC-like function to obtain the time difference between start and stop signals. The two-dimensional constant fraction discriminator allows thresholds in amplitude and time width to be set for accepting a signal. The CFD zero-crossing time, calculated by parabolic interpolation, has a numerical precision of 8 ps, so the actual time resolution is practically dictated by that of the detector, of the order of 1 ns for our liquid scintillator and photomultiplier read-out. A dynamical baseline may be measured in a time gate preceding the signal and restored after it. As the signal is digitized very close to the read-out, the code for calculating the different quantities of interest is written in the dedicated FPGA, accelerating the data flux towards a flexible tree architecture in which decisions are taken afterwards at different levels. Entire signal waveforms may also be stored for off-line analysis at a rate of ≤ 100 kHz. This generic and modular system is easily extensible from a few to about one hundred acquisition channels. FASTER is a versatile system open to small- and medium-scale experiments. We have recently used it to measure the neutron cross-talk yield between two liquid cells. Neutrons of 1.5 MeV were produced via the ³H(p, n)³He reaction by means of a pulsed proton beam with a period of 400 ns and a bunch width of a few ns. The detectors were placed at two different distances and angles (both larger for the second cell) with respect to the target and the beam direction, respectively. The TDC-like function of FASTER allowed the time-of-flight (TOF) measurement and hence the determination of the neutron energy.
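The digital CFD zero-crossing refined by parabolic interpolation can be sketched as follows; this is a simplified off-line reading of the algorithm, not the FPGA implementation, and the fraction and delay values are illustrative.

```python
# Sketch of a digital CFD zero-crossing with parabolic interpolation.
import numpy as np

def cfd_zero_crossing(wave, dt, fraction=0.3, delay=4):
    """Zero crossing of the bipolar signal f[i] = fraction*w[i+delay] - w[i]."""
    bipolar = fraction * wave[delay:] - wave[:-delay]
    # first positive-to-negative sign change, skipping the very first sample
    idx = np.where((bipolar[1:-1] > 0) & (bipolar[2:] <= 0))[0] + 1
    if idx.size == 0:
        return None
    i = int(idx[0])
    y0, y1, y2 = bipolar[i - 1], bipolar[i], bipolar[i + 1]
    # parabola y(x) = a*x^2 + b*x + c through x = -1, 0, 1 around sample i
    a, b, c = 0.5 * (y0 - 2.0 * y1 + y2), 0.5 * (y2 - y0), y1
    roots = np.roots([a, b, c]) if abs(a) > 1e-12 else np.array([-c / b])
    real = roots[np.isreal(roots)].real
    inside = real[(real >= 0.0) & (real <= 1.0)]
    shift = float(inside[0]) if inside.size else y1 / (y1 - y2)  # linear fallback
    return (i + shift) * dt   # time relative to the start of the bipolar trace
```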
The left panel of Fig. 3 displays the discriminating variable Q_s/Q_t as a function of the TOF for the nearest cell. The γ-induced events are localized in the lower, thin horizontal branch, where one may distinguish the direct γ peak at ≈ 80 ns. The n-induced events populate the upper branch, with a peak at ≈ 250 ns corresponding to the 1.5 MeV neutrons and a long tail of slower neutrons. Below this neutron peak, in the γ-induced event branch, note also the neutrons seen as γ-rays via the (n, n′γ) reaction on nuclei in the proximity of the cell. By simultaneously looking at the TOF of the two scintillator cells (Fig. 3, right panel), one may disentangle various correlations. The neutrons belonging to the main peak arrive in the first detector at ≈ 250 ns, and in the second one at ≈ 270 ns. The probability of detecting two neutrons from the same bunch is negligible. The vertical line corresponds to photons randomly arriving in the second detector in coincidence with the neutrons from the main neutron peak in the first detector, and vice versa for the horizontal line. The protuberance at ≈ 310 ns on the vertical line collects neutron cross-talk events: neutrons detected and scattered in the first cell and arriving later in the second one. Simulations will shed light on the origin of the events located on the diagonal lines. The precise time calibration as well as the global analysis to determine efficiencies and cross-talk probabilities for various geometrical configurations and different low neutron energies (≤ 4.9 MeV) are in progress [7].

Response of CsI(Tl) scintillators to heavy ions

Thallium-activated caesium iodide (CsI(Tl)) scintillating crystals offer the possibility of light-charged-particle isotopic identification by the pulse shape discrimination technique. Their luminescence was quantified in a simple formalism [8] including quenching via electron-hole recombination inside the fiducial volume along the impinging particle path, as well as high-efficiency scintillation of the δ-rays transporting the fraction F(x) of the electronic stopping power S_e(x) outside this volume [9]. Under some assumptions [8], the rate of the local electron and hole concentration n(x, t) in this volume of high carrier density is driven, to a first approximation, by an equation (Eq. (1)) with λ_A ∝ N_A, the activator concentration (Tl centres); λ_Q ∝ (N_D + N_n), the concentration of "defects" due to thermal vibrations N_D and to the interaction of the incident ion with the lattice nuclei, N_n ∝ S_n(x), the nuclear stopping power; λ_R stands for the direct electron-hole recombination scintillating in the UV band, not seen by the photomultiplier. The solution of Eq. (1), with the initial condition n(x, 0) = (1 − F(x))n₀(x), n₀(x) ∝ S_e(x), is of the same type as expression (13) in ref. [10]. The infinitesimal light output dL/dx is obtained by integrating over time the first term of Eq. (1), and the total light output by numerical integration over the total energy E₀ (Eq. (2)). It depends on four parameters a_G, a_n, a_R and E_δ, related respectively to the gain, the nuclear and recombination quenching, and the energy threshold for the creation of δ-rays. The corresponding formula (16) in ref. [8] was thus simplified without altering the quality of the light-output description for various ions. The corresponding analytical expression was applied for heavy-ion identification in Si-CsI(Tl) telescopes [11].
The ordinary Tl doping of about 200 ppm was optimized for γ-rays and light ions. In order to avoid the temporary depletion of the activator sites due to the high carrier density induced by heavy ions, one may increase the Tl doping (e.g. to 2000 ppm, as for the crystal of Fig. 4).

Diamond detector beam profiler

Its high radiation hardness, high thermal conductivity and extreme mechanical strength recommend Chemical Vapour Deposited (CVD) diamond as a promising material for nuclear physics purposes. Due to its high resistivity, diamond withstands large electric fields (≈ 6 V/μm), favouring a rapid collection of the charge carriers created by an ionizing particle. The dark current remains very low at room temperature (< 50 nA) because of the negligible intrinsic carrier concentration resulting from the wide band gap (≈ 5.5 eV), allowing it to work as a simple (solid) ionization chamber. Due to its small relative permittivity ε_r = 5.7, and hence a reduced capacitance in comparison with that of a silicon wafer of the same geometry, the current pulses induced in diamond have a very abrupt leading edge. The signals are shorter than those induced in Si, due to the larger mobility of both types of carriers in diamond. The shortcomings are mainly related to the limited available area in the case of single crystals (scCVD) and to the bulk polarization [13] of the polycrystalline (pcCVD) plates, which contain graphite inter-spaces acting as carrier traps; some palliatives exist to overcome this latter difficulty [14]. Characterization of the stable and radioactive beams at the future facilities, in particular the profile and the intensity, must be done with robust beam profilers. The rise times between 10% and 90% of the signal maximum delivered by two pcCVD detectors are shown in Fig. 5, stressing the influence of the plate capacitance on the one hand, and that of the electronic chain on the other hand. The left panel concerns a detector of 22×22 mm² area and ≈ 300 μm thickness (C ≈ 80 pF, time constant RC ≈ 3.9 ns) for 10.9 AMeV ⁵⁸Ni ions accelerated on the SME line of GANIL. Adding the 0.9 ns contribution of the MATACQ-VME acquisition system (400 MHz bandwidth (BW) and 2 GHz sampling) gives √(3.9² + 0.9²) ≈ 4 ns, explaining the measured mean value. The histogram in the right panel was obtained with 13.7 AMeV ¹⁶O on one pitch (18×1 mm²) of a double-sided strip detector of ≈ 350 μm thickness (C ≈ 3 pF, time constant ≈ 0.13 ns) by means of the same MATACQ card. The main contribution to the measured mean value comes in this case from the low BW of the MATACQ system. Such segmented detectors may be manufactured as 50×50 mm² plates, completely adapted to obtaining the profile of 10⁶ pps beams [14].

Isotope identification in silicon detectors

Selected high-resistivity silicon detectors give access to very good heavy-ion identification results in ΔE − E telescopes [15], provided that an adequate method is applied.
Eq. (1) keeps its validity in the case of a silicon detector, the active role of collecting the carriers being played by the electric field. The lattice defects also dissociate the carriers without trapping them, and the signal is associated with the first two terms. The direct electron-hole recombination is mainly responsible for the eventual pulse shape defect. The signal expression in each of the two detectors of the telescope is thus close to Eq. (2), the four parameters being determined by a global fit of the geometrical loci corresponding to a wide range of ions in a ΔE − E map. The good isotope identification is evidenced in the two-dimensional upper plot of Fig. 6 (for fixed a_n = 0). As the linearization is not yet perfect, the quality is artificially degraded after the projection on the abscissa, presented on a logarithmic scale in the lower panel. This method allows realistic confidence weights to be assigned to each isotope when the mass separation is not 100%. The ΔE − E method implies a threshold due to the ΔE detector thickness. Conversely, there is no threshold, at least in principle, for access to the shape of the digitized signal as a function of time [16]. This shape depends, via the plasma delay phenomenon, on the specific electronic stopping power and hence on the nature of the impinging ion. Heavy-ion-induced signals were recently described within a microscopic treatment of screening [17], and by a phenomenological approach based on the dielectric polarization and the progressive dissociation of the carrier pairs [18,19], providing bases to address the funneling effect too [20].

High-quality neutron and heavy-ion identification is mandatory for nuclear physics experimental investigations, and the limits have already been pushed very far. Promising efforts are now being made in the direction of large-volume scintillating materials with new properties, large-area scCVD diamond plates and the selection of homogeneous silicon wafers, to be tested afterwards by physicists. Digital signal processing has opened new opportunities, eventually promoting new discriminating variables. Realistic simulations of the processes involved in the generation of the signal in various detectors are probably worthwhile investments. They may lead to appropriate "schematic" approximations which, combined with computing power, will hopefully provide improved automatic procedures of calibration and reaction-product identification for large-scale arrays in nuclear physics.

Figure 1: Average signals induced in a BC501A cell by neutrons and γ-rays.

Figure 2: Left: traditional two-dimensional plot Q_s vs Q_t (calibrated into MeVee) illustrating the separation of neutrons (upper branch) and γ-rays (lower branch) from an AmBe source; the energetic muons are localized in the latter branch, but at higher total charges Q_t. Right: figure of merit (stars) and neutron-area contamination (in %) by γ-events (triangles) at one tenth of the neutron maximum.

Figure 3: Left: ratio Q_s/Q_t as a function of the time of flight. Right: correlated events in two BC501A cells in terms of time of flight. See text for more detail.

Figure 4: Left: light output induced by ≈ 8 AMeV ³⁶Cl in a 2000 ppm Tl-doped CsI(Tl) crystal (symbols connected by a thin line) vs time (4 ns/channel); the other curve stresses the fast peak due to the high Tl doping. Right: data from ref. [12]; the optimum-efficiency temperature is lower for α particles (open circles and dashed curve) than for protons (full circles and solid curve).
Figure 5: Comparison of the timing properties of a large-area pcCVD detector (left panel) and a strip of small area (right panel). See text for more detail.

Figure 6: Linearization and isotopic identification in a ΔE − E silicon telescope.
Biophysical and functional study of CRL5Ozz, a muscle-specific ubiquitin ligase complex

Ozz, a member of the SOCS-box family of proteins, is the substrate-binding component of CRL5Ozz, a muscle-specific Cullin-RING ubiquitin ligase complex composed of Elongin B/C, Cullin 5 and Rbx1. CRL5Ozz targets for proteasomal degradation selected pools of substrates, including sarcolemma-associated β-catenin, sarcomeric MyHCemb and Alix/PDCD6IP, which all interact with the actin cytoskeleton. Ubiquitination and degradation of these substrates are required for the remodeling of the contractile sarcomeric apparatus. However, how CRL5Ozz assembles into an active E3 complex and interacts with its substrates remains unexplored. Here, we applied a baculovirus-based expression system to produce large quantities of two subcomplexes, Ozz–EloBC and Cul5–Rbx1. We show that these subcomplexes mixed in a 1:1 ratio reconstitute a five-component CRL5Ozz monomer and dimer, but that the reconstituted complex interacts with its substrates only as a monomer. The in vitro assembled CRL5Ozz complex maintains the capacity to polyubiquitinate each of its substrates, indicating that the protein production method used in these studies is well suited to generating large amounts of a functional CRL5Ozz. Our findings highlight a mode of assembly of CRL5Ozz that differs in the presence or absence of its cognate substrates and warrant further structural studies.

During development and maturation of the muscle fibers, a series of events occurs that leads muscle progenitor cells through morphological and functional transitions. These events require the tight control of the abundance of both structural and regulatory proteins, as well as the selective degradation of embryonic and fetal isoforms that are replaced by their adult counterparts 1. The ubiquitin proteasome system 2 plays a central role in many of these processes. Protein ubiquitination is a post-translational modification that involves the covalent attachment of ubiquitin (Ub) or a ubiquitin chain to the ε-amino group of a lysine residue on a target substrate 3-5. Depending on the type of ubiquitination, the Ub-marked substrate is either destined for proteasomal degradation or assumes a conformation that favors its recognition by other protein partners and its intracellular trafficking 6. The ubiquitination reaction requires the sequential actions of an E1 activating enzyme, an E2 conjugating enzyme, and finally an E3 ubiquitin ligase 3-6. The latter defines the selectivity for the target substrates as well as the sites where ubiquitination occurs 7. Ozz, a member of the suppressor of cytokine signaling (SOCS)-box family of proteins 8, is the substrate-binding component of the Cullin-RING ubiquitin ligase (CRL) CRL5 Ozz (formerly referred to as Ozz-E3), consisting of Elongin B and Elongin C (EloBC), the Cullin protein Cul5, and the RING-finger protein Rbx1 1,9-12. Within the Ozz primary structure, the SOCS-box domain is located at the C-terminus of the protein and embeds the EloBC binding site that directs the assembly of the rest of the complex. The substrate recognition site consists of two adjacent neuralized homology repeats (NHR) located at the N-terminus 13,14. The NHR motif is present in the Drosophila protein Neuralized (Neur), which is a single-chain E3 ligase involved in the degradation of Delta and the specification of neural cell fate 13-15. CRL5 Ozz is a unique member of the CRL family of E3 ligases.
It is tissue specific, being expressed exclusively in striated muscle; it targets and ubiquitinates selected subpopulations of muscle proteins, which share the attribute of being fully assembled components of multiprotein complexes linked to the actin cytoskeleton. These include the plasma membrane-associated β-catenin 10, the fully assembled sarcomeric MyHC emb 1 and a distinct pool of the Alix/PDCD6IP scaffold protein that bridges the subcortical actomyosin network with membrane complexes 9. To form an active CRL, Ozz needs to complex with the other components, a process that adds an extra tier to the regulation of substrate recognition and ubiquitination by this ligase 10. Proper function and regulation of CRL5 Ozz assure the assembly and stability of the contractile sarcomeric unit, as well as the interconnection between membrane complexes and the actin cytoskeleton in skeletal muscle. Ozz ablation in vivo results in defects in myofibrillogenesis and sarcomere assembly 1,9,10. However, the full spectrum of CRL5 Ozz functions is still unfolding, and only a few of its substrates and their cellular roles in striated muscle have been investigated. To begin to address these questions, we have now investigated how CRL5 Ozz assembles into an active E3 complex and how it interacts with three of its substrates. To this end, we developed a baculovirus (BV)-based expression system in insect cells to produce large quantities of the CRL5 Ozz complex and its individual substrates. We have successfully reconstituted a functional and active CRL5 Ozz by combining two separately expressed subcomplexes, Ozz-EloBC and Cul5-Rbx1. Biophysical analyses of the in vitro reconstituted CRL5 Ozz, alone or combined with individual substrates, reveal a mode of assembly that differs in the absence vs the presence of the substrates. Our results hold promise for future structural studies of CRL5 Ozz in combination with its substrates.

Materials and methods

Generation of CRL5 Ozz BV. Full-length cDNA clones encoding human Ozz (Ozz), Elongin B, Elongin C, Cul5 and Rbx1 were amplified by RT-PCR (Table 1) from commercially available human RNA (Clontech-Takara Bio). His-tagged Ozz and Rbx1 were cloned into pFastBac HTb and pFastBac HTc, respectively, while Elongin B, Elongin C and Cul5 were cloned into the pFastBac1 vector (Life Technologies). The generated plasmids were transformed into DH10Bac competent cells to generate bacmids for the individual Ozz-E3 components. The recombinant bacmids were transfected into Spodoptera frugiperda (Sf9) insect cells according to the manufacturer's instructions (Life Technologies). The isolated P1 recombinant BV was amplified to generate P2 and P3 virus stocks and then further purified by plaque assay as described before 16. To test the expression of the Ozz ligase components, Sf9 cells were seeded in 6-well plates (1 × 10⁶ cells/well) in serum-free SFX insect cell medium (Hyclone) and infected with BV. The infected cells were incubated at 27 °C and harvested after 3 days. Aliquots (10 μl) of the cell lysates were resolved on SDS-polyacrylamide gels and stained with Coomassie Brilliant Blue (BIO-RAD). No human subjects were used in this study.

Protein expression and purification. TniPRO (Trichoplusia ni) cells (Expression Systems) (1 × 10⁶ cells/ml) were seeded in disposable Erlenmeyer flasks (1000 ml/flask; Corning Life Sciences) and infected with Ozz-EloBC or Cul5-Rbx1 BV. Infected cells were incubated at 27 °C for 72 h in an orbital shaker-incubator (135 rpm).
Cells were harvested by centrifugation (1000g, 15 min), resuspended in Tris-HCl lysis buffer (50 mM Tris-HCl pH 7.6, 150 mM NaCl, 30 mM imidazole) and sonicated with 6 pulses of 10 s with a Branson sonicator at setting 3. The cell lysates were centrifuged twice at 15,000 rpm for 30 min at 4 °C. Ni-NTA agarose beads (QIAGEN) were spun at 2000 rpm for 5 min, washed with 1 ml of water, spun at 2000 rpm for 5 min, washed once with 2 ml of lysis buffer, spun at 2000 rpm for 5 min, then resuspended in 1 ml of lysis buffer (50% slurry), added to the cell lysate supernatant and incubated for 2 h at 4 °C. The lysate was spun at 1000 rpm for 5 min at 4 °C, and the resulting pellet was resuspended in 1 ml of washing buffer (50 mM Tris-HCl pH 7.6, 150 mM NaCl and 30 mM imidazole) and loaded onto an Econo-Pac chromatography column (BIO-RAD). The beads were washed twice with 10 ml of washing buffer. The bound proteins were eluted in 0.5 ml of elution buffer (50 mM Tris-HCl pH 7.6, 50 mM NaCl, 200 mM imidazole and 10% glycerol) and analyzed on SDS-polyacrylamide gels stained with Coomassie Brilliant Blue (BIO-RAD).

Reconstitution of CRL5 Ozz in vitro by gel filtration. Ozz-EloBC and Cul5-Rbx1 complexes were mixed in a 1:1 ratio and run through a Superose 6 10/300 GL gel filtration column (GE Healthcare). The column was equilibrated with 50 mM Tris pH 7.6, 150 mM NaCl. The sample was applied to the pre-equilibrated column at a flow rate of 0.3 ml/min. The gel filtration fractions (250 μl) were pooled and concentrated in an Amicon Ultra column (Millipore); 14 μl of the concentrated fractions was heat-denatured and run on SDS-polyacrylamide gels to determine their constituents. For calculation of the molecular weight, the column was calibrated with the following protein markers: thyroglobulin, 669 kDa; apoferritin, 443 kDa; β-amylase, 200 kDa; carbonic anhydrase, 29 kDa (BIO-RAD).

Immunoprecipitation of CRL5 Ozz. Purified Ozz-EloBC and Cul5-Rbx1 were mixed at a 1:1 ratio and incubated on ice for 1 h. The mixture of the two subcomplexes was resuspended in IP buffer (50 mM Tris-HCl pH 7.6, 150 mM NaCl, 500 mM EDTA and 0.1% NP-40). GammaBind Plus Sepharose beads (GE Healthcare) were washed three times with IP buffer, added to the Ozz ligase and incubated for 1 h. The precleared Ozz ligase was incubated with 2.5 μg of anti-Elongin C (BD Bioscience) and 5 μg of anti-Rbx1 (Neomarkers) antibodies for 2 h at room temperature (RT). Samples were immunoprecipitated with GammaBind Plus Sepharose (GE Healthcare) for 1 h at RT. The beads were washed three times with IP buffer and once with IP buffer without detergent. Bound proteins were released by boiling the beads in sample buffer and separated on SDS-polyacrylamide gels under denaturing conditions, followed by SYPRO Ruby protein gel staining (ThermoFisher Scientific).

Analytical ultracentrifugation. Purified insect-cell-expressed Ozz-E3 subcomplexes and the reconstituted Ozz-E3 ubiquitin ligase were subjected to sedimentation velocity in a ProteomeLab XL-I analytical ultracentrifuge with a four-hole rotor (Beckman An-60Ti) following standard protocols 17. Samples in buffer containing 10 mM sodium phosphate, 1.8 mM potassium phosphate pH 7.2, 137 mM NaCl and 0.27 mM KCl were loaded into cell assemblies comprising double-sector charcoal-filled centerpieces with a 12 mm path length and sapphire windows. The buffer density and viscosity were calculated from its composition using the software SEDNTERP (http://www.jphilo.mailway.com/download.htm) 18.
The partial specific volumes and the molar masses of the proteins were calculated from their amino acid compositions in SEDFIT (https://sedfitsedphat.nibib.nih.gov/software/default.aspx). The cell assemblies, containing identical sample and reference buffer volumes of 390 µl, were placed in the rotor and temperature-equilibrated at rest at 20 °C for 2 h before the rotor was accelerated from 0 to 50,000 rpm. Rayleigh interference optical data were collected at 1-min intervals for 12 h. The velocity data were modeled with diffusion-deconvoluted sedimentation coefficient distributions c(s) in SEDFIT, using algebraic noise decomposition, with the signal-average frictional ratio and the meniscus position refined by non-linear regression 19. The s-values were corrected for the time and finite acceleration of the rotor, which was accounted for in the evaluation of the Lamm equation solutions 20. Maximum-entropy regularization was applied at a confidence level of P = 0.68. The two-dimensional size-shape distribution, c(s, f/f₀) (one dimension being the s-distribution and the other the f/f₀-distribution), was calculated with an equidistant f/f₀ grid of 0.2 steps varying from 0.5 to 2.5 and a linear s-grid from 1 to 20 S with 100 s-values, with Tikhonov-Phillips regularization at one standard deviation. The velocity data were transformed to c(s, f/f₀), c(s, M) and c(s, R) distributions, with M the molar mass, R the Stokes radius, f/f₀ the frictional ratio and s the sedimentation coefficient, and plotted as contour plots. The color temperature of the contour lines indicates the population of species 21. The signal-weighted-average sedimentation coefficient s_w provides a measure of the species populations in a system. Therefore, signal-weighted-average sedimentation coefficient values, s_w, were derived by integration of all species from 3 to 7 S of the c(s) distributions of the Ozz-EloBC subcomplex at concentrations from 1.8 to 13.3 μM. The measured isotherm of s_w as a function of solution composition was then modeled as a reversible monomer-dimer self-association system in SEDPHAT (https://sedfitsedphat.nibib.nih.gov/software/default.aspx). The association scheme was A + A ↔ (A)₂, with K_D12 the dimer dissociation constant, A the monomer and (A)₂ the dimer. Non-linear least-squares analysis was performed in which the equilibrium association constant K₁₂ was optimized in the fit (K₁₂ = 1/K_D12) 22. The errors of this fit represent the 68% confidence interval (CI) obtained with an automated surface projection method 23. All plots were created in GUSSI (http://www.utsouthwestern.edu/labs/mbr/software/) 24.

Analytical glycerol gradient ultracentrifugation and micro-fractionation. A 15-45% glycerol gradient containing 20 mM Tris-HCl pH 7.6, 150 mM NaCl buffer was constructed by layering solutions of decreasing glycerol percentage from 45 to 15% (13 layers, 98 µl each; total volume 1.30 ml; height 3.0 cm) in an 11 × 34-mm centrifuge tube. Protein solution (27.5 µl) was layered on top of the gradient, followed by 50 µl of cold silicone oil to prevent evaporation. The tube was then placed in a pre-cooled bucket and centrifuged for 8 or 12 h at a rotor speed of 55,000 rpm at 4 °C in an Optima TLX preparative ultracentrifuge using a swinging-bucket TLS-55 rotor (Beckman Coulter, Fullerton, CA, USA). Deceleration was performed without braking, and the tube was immediately placed on ice.
Micro-fractionation of the tube contents was carried out using a BRANDEL automated micro-fractionator equipped with the FR-HA 1.0 block assembly (Brandel, Gaithersburg, MD, USA). The tube was placed in the receptacle, and fractions were removed from the upper surface of the solution by stepwise elevation of the receptacle in precise height increments. A total of 27 fractions were collected in a 96-well plate; each fraction was approximately 45 µl in volume, and the bottom fraction was 125 µl 25. To calibrate the molecular weight, a mixture of the following proteins was loaded on top of the glycerol gradient: apoferritin, 443 kDa (15 μg); β-amylase, 200 kDa (15 μg); albumin, 66 kDa (15 μg) (SIGMA). The markers were centrifuged under the same experimental conditions as CRL5 Ozz and the substrates.

In vitro ubiquitination.

Western blotting. Protein concentrations were determined as OD 595, using BSA as the standard. 10 µg of soluble protein was electrophoresed (100 V, 60 min) on 12% SDS-polyacrylamide gels or gradient gels (4-20%, BIO-RAD) and wet-blotted for 3 h at 50 mA. Membranes were probed with specific antibodies at the dilutions listed above, followed by HRP-conjugated goat anti-rabbit or anti-mouse IgG (Jackson ImmunoResearch Laboratories). Signals were detected with a West Femto maximum-sensitivity substrate kit (Thermo Scientific) on blue films (Midsci).

Results

Expression and purification of CRL5 Ozz components. To study the assembly of CRL5 Ozz in vitro, we chose a BV-based system in insect cells to co-express the 5 components of the ligase complex. For this purpose, we first ascertained that all 5 proteins could be expressed at comparable levels. We therefore performed plaque assays of Sf9 insect cells infected separately with the individual BV constructs encoding human Ozz, EloB and EloC, Cul5 and Rbx1, and selected the best-expressing clones. All 5 CRL5 Ozz components appeared to be expressed at similar levels when tested on Coomassie-stained SDS-polyacrylamide gels (Fig. 1a). Their expression was also validated by Western blot analysis using the corresponding monospecific antibodies (Fig. 1b). To obtain an assembled CRL5 Ozz complex, we opted to co-express two sets of proteins separately, Ozz-EloBC and Cul5-Rbx1, rather than co-expressing all 5 components simultaneously, which resulted in a low purification yield of CRL5 Ozz and unequal expression rates of the individual proteins. For this reason, and for further purification of the subcomplexes, two of the overexpressed proteins, Ozz and Rbx1, carried a histidine (His) tag. To obtain a high yield of the overexpressed proteins, we assessed the rate of infection and protein production in two additional insect cell strains, TniPRO and expresSF+, and compared them with those obtained with the routinely used Sf9 cells. Co-expression of Ozz-EloBC and Cul5-Rbx1 in TniPRO cells, followed by His-tag purification of the two subcomplexes, showed a twofold higher expression of the purified protein complexes than in expresSF+ cells and about 3-4-fold higher expression than in the original Sf9 cells (Fig. 1c,d). These results established TniPRO as the most suitable and reliable insect cell strain to obtain high quantities of the overexpressed subcomplexes. We also tested the optimal buffer composition and pH affording the best purification profile and a consistent quality and yield of the purified complexes (Supplementary Fig. S1). No differences among the three buffer conditions were observed (Supplementary Fig. S1).
Overall, these results underscore the importance of testing multiple insect cell strains and different buffer compositions and pH values to optimize the yield of the co-expressed proteins and to ensure the quality of the final products.

Ozz-EloBC and Cul5-Rbx1 assemble in vitro into CRL5 Ozz. Next, we asked whether the Ozz-EloBC and Cul5-Rbx1 subcomplexes could assemble in vitro into the 5-component CRL5 Ozz complex. For this purpose, we carried out hydrodynamic analyses of the separate and combined subcomplexes using two classical techniques: size-exclusion chromatography and analytical glycerol gradient ultracentrifugation. Both methods separate complexes based on their molecular weight. For size-exclusion chromatography, a 1:1 mixture of the two purified subcomplexes (Fig. 2a) was loaded directly onto a gel filtration column. The chromatography profile showed that the mixture of the two subcomplexes eluted from the column mainly in one broad peak (R_V = 14.17 ml), corresponding to a molecular weight of ~340 kDa (Fig. 2b), as calculated from the elution profile of the protein standards (Fig. 2c). SDS-polyacrylamide gel analysis of the eluted fractions showed that the bulk of all 5 proteins of the complex resolved together in a single fraction (Peak 2, Fig. 2b,d), with only a small proportion eluting in Peak 3. Based on this size distribution, the fraction containing all five components of CRL5 Ozz had a calculated molecular weight of ~340 kDa (R_V = 14.17 ml), suggesting that CRL5 Ozz eluted from the column as a dimer (Fig. 2a-d). To prove that CRL5 Ozz was reconstituted in vitro, the mixture of Ozz-EloBC and Cul5-Rbx1 subcomplexes was subjected to immunoprecipitation using antibodies against Elongin C or Rbx1. Immunoprecipitated proteins were then separated on SDS-polyacrylamide gels and stained with SYPRO Ruby (Fig. 2e). The results demonstrated that the Ozz-EloBC and Cul5-Rbx1 subcomplexes (Fig. 2a) indeed interact with each other and assemble into a 5-component CRL5 Ozz (Fig. 2e). We next fractionated the Ozz-EloBC and Cul5-Rbx1 subcomplexes, as well as the assembled CRL5 Ozz, on glycerol density gradients by ultracentrifugation. All gradients were run simultaneously and under the same conditions. Fractions from each gradient were separated on SDS-polyacrylamide gels and stained with SYPRO Ruby. The fractionation profiles of the complexes and their protein components were generated by densitometric measurements of band intensities (Fig. 3a). Molecular weights were calculated based on the fractionation patterns of several gel filtration markers (Fig. 3c). Using this method, we found that the bulk of Ozz-EloBC was contained in fractions 3 to 10, with a minor tail extending to fraction 12. The fractionation curve for this subcomplex indicated a size range of ~60-443 kDa (Fig. 3a, upper panel). The Cul5-Rbx1 subcomplex fractionated in a nearly identical pattern and a similar molecular weight range (Fig. 3a, middle panel). However, the mixture of Ozz-EloBC and Cul5-Rbx1 gave a different fractionation profile, with all 5 components eluting together in fractions 5-11 (Fig. 3a, lower panel). The peak for CRL5 Ozz was in fractions 7-9, trailing to fraction 17, showing a clear shift in molecular weight compared with the individual subcomplexes (Fig. 3a).
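The molecular-weight calibration underlying the gel filtration and gradient profiles (Figs 2c and 3c) can be sketched as a linear fit of log₁₀(MW) against elution volume. The marker masses below are those quoted in the text, while the elution volumes are hypothetical placeholders for illustration only.

```python
# Sketch of a log-linear molecular-weight calibration for column fractions.
import numpy as np

def calibrate(volumes, masses_kda):
    """Least-squares fit of log10(MW) vs elution volume; returns a predictor."""
    slope, intercept = np.polyfit(volumes, np.log10(masses_kda), 1)
    return lambda v: 10.0 ** (slope * np.asarray(v) + intercept)

# Hypothetical elution volumes for the standards named in Fig. 2c;
# only the marker masses (669, 443, 200, 29 kDa) come from the text.
vol = np.array([12.1, 13.5, 15.2, 19.8])        # ml, illustrative
mw = calibrate(vol, np.array([669.0, 443.0, 200.0, 29.0]))
print(round(float(mw(14.17))))                  # mass estimate at R_V = 14.17 ml
```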
These results demonstrate that, by mixing the two subcomplexes, the 5 protein components sedimented together in fractions corresponding to sizes of ~ 220-443 kDa (Fig. 3a,c), containing the assembled CRL5Ozz, which again appeared in part dimeric (Fig. 3a). We used albumin as an internal control to show that this protein did not shift in molecular weight in the presence of CRL5Ozz, indicating no physical interaction between this protein and the ligase (Fig. 3b). We next performed sedimentation velocity analytical ultracentrifugation of the Ozz-EloBC (Table 2, Fig. 4a) and Cul5-Rbx1 (Table 2, Fig. 4b) subcomplexes, and of the reconstituted CRL5Ozz (Table 2, Fig. 4c). We first evaluated the sedimentation coefficient distribution profiles, c(s), of the individual Ozz-EloBC and Cul5-Rbx1 subcomplexes and then that of the reconstituted CRL5Ozz (Table 2, Fig. 4). At 0.35 mg/ml the c(s) distribution profile of Ozz-EloBC consisted of two major separate peaks, each representing species of dissimilar size, indicating that Ozz-EloBC assembled into oligomers of different masses (Table 2, Fig. 4a). One of the two major peaks had a sedimentation value, s20,w, of 5.70 S, corresponding to a molar mass of 101,738 Da, close to the theoretical molecular weight of the 2:2:2 dimer complex (Table 2, Fig. 4a); the other major peak had an s20,w-value of 3.98 S, corresponding to a molar mass of 55,325 Da, close to the theoretical molecular weight of the 1:1:1 monomer complex (57,591 Da) (Table 2, Fig. 4a). The best-fit weight-average frictional ratio of 1.30 obtained from the analysis is indicative of a slightly extended globular shape of the protein complex. In contrast, the sedimentation analysis of Cul5-Rbx1 (Table 2, Fig. 4b) at a similar concentration, 0.21 mg/ml, showed only one major peak with an s20,w-value of 4.88 S, which corresponds to a molar mass of 104,744 Da, close to the theoretical molecular weight of a 1:1 protein complex (103,212 Da) (Table 2, Fig. 4b). The best-fit weight-average frictional ratio of 1.63 suggests that the molecular shape of this subcomplex is extensively elongated. We then analyzed the c(s) distribution of the Ozz-EloBC and Cul5-Rbx1 mixture (Table 2, Fig. 4c). This analysis indicated that the Ozz-EloBC and Cul5-Rbx1 mixture has a different sedimentation profile than the separate Ozz-EloBC or Cul5-Rbx1 subcomplexes (Fig. 4a,b). The different peaks obtained with the individual Ozz-EloBC or Cul5-Rbx1 subcomplexes were present, but in much reduced amounts in the mixture, indicating that the two subcomplexes assembled at least in part into a five-protein CRL5Ozz multimeric complex. In this c(s) distribution one major peak (55% of total protein) was distinguishable from the rest, with an s20,w-value of 7.20 S, which corresponded to a molar mass of 159,245 Da, close to the theoretical molecular weight of a 1:1:1:1:1 five-protein CRL5Ozz (160,497 Da) (Table 2, Fig. 4c). The best-fit weight-average frictional ratio of 1.46 indicates an elongated molecular shape. Analysis of the same sedimentation velocity data with the two-dimensional size-shape distribution model, c(s,f/f0), yielded similar results (Table 3, Fig. 4d-f). The Ozz-EloBC subcomplex showed an s20,w-value of 6.19 S, a molar mass of 108,998 Da, close to the 106,424 Da dimer mass, and a frictional ratio of 1.31, indicating a folded, slightly extended, globular molecular shape (Table 3, Fig. 4d). The Cul5-Rbx1 subcomplex yielded an s20,w-value of 5.00 S, a molar mass of 105,821 Da, close to the theoretical molecular weight of 103,212 Da, and a frictional ratio of 1.61, indicating an elongated molecular shape (Table 3, Fig. 4e).
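The molar masses quoted above come from the AUC analysis software (c(s) and c(s,f/f0) models). As a sanity check, the Svedberg relation links s20,w, the frictional ratio and the molar mass directly. The sketch below assumes a typical protein partial specific volume of 0.73 cm³/g and water at 20 °C — standard assumptions, not values stated in the paper — and reproduces a mass close to the reported 159 kDa for the 7.20 S, f/f0 = 1.46 peak.

```python
# Sketch: molar mass from s20,w and frictional ratio via the Svedberg relation.
# s = M(1 - vbar*rho) / (NA * f), f = 6*pi*eta*(f/f0)*R0, R0 = (3*vbar*M/(4*pi*NA))^(1/3).
# Constants are standard assumptions (vbar = 0.73 cm^3/g, water at 20 C).
import math

NA   = 6.022e23        # /mol
eta  = 1.002e-2        # poise (g cm^-1 s^-1), water at 20 C
rho  = 0.9982          # g/cm^3, water at 20 C
vbar = 0.73            # cm^3/g, typical protein

def molar_mass(s20w_svedberg, ff0):
    s = s20w_svedberg * 1e-13                       # Svedberg -> seconds
    r0_coeff = (3.0 * vbar / (4.0 * math.pi * NA)) ** (1.0 / 3.0)
    # Solving for M, since R0 scales as M^(1/3): M^(2/3) = ...
    m23 = s * NA * 6.0 * math.pi * eta * ff0 * r0_coeff / (1.0 - vbar * rho)
    return m23 ** 1.5                                # g/mol

print(f"M ~ {molar_mass(7.20, 1.46)/1000:.0f} kDa")  # ~157 kDa, near the 159 kDa fit
```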
The Ozz-EloBC and Cul5-Rbx1 mixture showed the presence of a peak (48% of total proteins) with an s20,w-value of 6.85 S and a frictional ratio of 1.61, which corresponded to a molar mass of 165,688 Da, close to the five-protein 1:1:1:1:1 CRL5Ozz molecular weight (160,497 Da) (Table 3, Fig. 4f). Again, the large frictional ratio suggests an elongated molecular shape. However, CRL5Ozz also formed complexes with higher stoichiometry: a peak (8% of total proteins) with an s20,w-value of 9.95 S and a frictional ratio of 1.72 corresponded to a molar mass of 327,193 Da, close to the dimeric CRL5Ozz molecular weight of 320,992 Da. The frictional ratio of 1.72 for this putative dimeric complex indicates a possible partial long-end-to-long-end association of the monomers into the dimer (Table 3, Fig. 4f). By contrast, a similar analysis of the simultaneously co-expressed and purified Ozz + EloB + EloC + Cul5 + Rbx1 complex displayed several peaks in the c(s) profile, ranging from 6 to 15 S, indicating the presence of high-molar-mass species that were most probably complexes with stoichiometries different from those of the "monomer" or "dimer" reconstituted CRL5Ozz (Supplementary Fig. S2a,b and Supplementary Tables S1, S2). Because Ozz-EloBC can form a dimer, we also investigated the concentration-dependent self-association of this subcomplex (Table 4, Fig. 5a,b). At relatively high concentrations this subcomplex formed a dimer. Therefore, the signal-weighted-average sedimentation coefficient, sw, which provides a measure of species populations in a system, was plotted against concentration (Fig. 5a). The dimer dissociation constant, KD12, obtained from this analysis was 0.70 [0.43, 1.07] µM, with the confidence interval (CI) in brackets (Table 4, Fig. 5b). This observation suggests that the Ozz-EloBC subcomplex forms dimers, most probably via an Ozz-Ozz interaction, and strengthens the idea that the Ozz-EloBC and Cul5-Rbx1 mixture can form not only a 1:1:1:1:1 complex but also a dimeric CRL5Ozz. CRL5Ozz physically associates with its substrates in vitro. To determine whether the reconstituted ligase complex could recognize and bind to its substrates, the assembled CRL5Ozz was mixed in vitro with purified preparations of three substrates: glutathione S-transferase (GST)-tagged β-catenin (~ 112 kDa), a GST-MyHCemb fragment (1041-1941 a.a.) (~ 130 kDa) and GST-Alix (~ 122 kDa) (Fig. 6). Assembled CRL5Ozz alone or combined with each of the substrates, as well as the three substrates by themselves, were subjected to glycerol gradient ultracentrifugation combined with microfractionation, and proteins were visualized on SDS-polyacrylamide gels stained with SYPRO Ruby (Fig. 6). Also in this case, all gradients were run simultaneously and under the same conditions. Analyses of individual fractions revealed that GST-β-catenin was detected in fractions 5-10 (Fig. 6a, upper panel), while, when combined with CRL5Ozz, it sedimented in fractions 6-12 (Fig. 6a, lower panel), showing a clear shift in its molecular weight.

Table 2. Summary of results of the velocity c(s) analysis of the Ozz-EloBC, Cul5-Rbx1 and CRL5Ozz (Ozz-EloBC + Cul5-Rbx1) complexes in 10 mM sodium phosphate, 1.8 mM potassium phosphate pH 7.2, 137 mM NaCl and 0.27 mM KCl buffer at 20 °C. a Total concentration of sample in mg/ml. b Sedimentation coefficient taken from the ordinate maximum of each peak in the best-fit c(s) distribution at 20 °C, with the percentage protein amount in parenthesis. The sedimentation coefficient (s-value) is a measure of the size and shape of a protein in a solution with a specific density and viscosity at a specific temperature.
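The species-population description underlying the self-association analysis above follows from simple mass-action algebra once KD12 is known. Below is a minimal sketch for a monomer-dimer equilibrium using the reported KD12 = 0.70 µM; the definition KD = [M]²/[D] and the example concentrations are our assumptions for illustration.

```python
# Sketch: monomer/dimer protomer fractions for a self-associating species with
# KD12 = 0.70 uM (M + M <-> D, KD = [M]^2/[D]). Concentrations in protomer units.
import math

KD = 0.70e-6  # mol/L, reported dimer dissociation constant

def fractions(c_tot):
    """Return (monomer fraction, dimer fraction) of protomers at total conc c_tot."""
    # c_tot = m + 2*m^2/KD  ->  (2/KD)*m^2 + m - c_tot = 0, positive root:
    m = (math.sqrt(1.0 + 8.0 * c_tot / KD) - 1.0) * KD / 4.0
    d = m * m / KD                       # dimer molar concentration
    return m / c_tot, 2.0 * d / c_tot

for c in (0.05e-6, 0.70e-6, 3.5e-6):     # example total concentrations
    fm, fd = fractions(c)
    print(f"c_tot = {c*1e6:5.2f} uM: monomer {fm:.2f}, dimer {fd:.2f}")
```

At c_tot = KD the protomers split evenly between monomer and dimer, which is the usual landmark of such isotherms.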
CRL5Ozz eluted in a nearly identical pattern to GST-β-catenin, an indication that CRL5Ozz co-migrated with GST-β-catenin (Fig. 6a, lower panel). Remarkably, in the presence of β-catenin, CRL5Ozz sedimented in the same fractions as its substrate without trailing to lower molecular weight fractions as it did without the substrate (Fig. 6a). A similar protein distribution profile was observed when CRL5Ozz was combined with GST-MyHCemb (Fig. 6b) or GST-Alix (Fig. 6c).

Figure 5. (b) Species population plots of the fractional protomer concentration vs. the log of total concentration (molar), with the amounts of monomer and dimer at specific concentrations determined by the KD12 value of the self-association model (Table 4).

We used albumin as an internal control to show that the CRL5Ozz substrate did not change its sedimentation profile in the presence of an unrelated protein (Fig. 6d). Similar sedimentation profiles were observed when samples were ultracentrifuged for a longer period (12 h instead of 8 h) (Supplementary Fig. S3a-f). Based on the molecular profiles in the presence or absence of the substrates, we can infer that CRL5Ozz interacts with each of them as a monomer. These results indicate that CRL5Ozz assembles in vitro and retains its ability to recognize and physically interact with each of its substrates (Fig. 6).

Figure 6. Glycerol gradient ultracentrifugation microfractionation analysis of CRL5Ozz and its substrates. Assembled CRL5Ozz mixed with purified preparations of (a) GST-full length β-catenin, (b) the GST-MyHCemb fragment (1041-1941 a.a.) and (c) GST-full length Alix was fractionated from a post-ultracentrifugation glycerol gradient (8 h). Aliquots of each fraction were separated on SDS-polyacrylamide gels and their protein content visualized on gels stained with SYPRO Ruby. The densitometric measurement of the band intensity of the proteins in each fraction showed a shift to a higher molecular weight when CRL5Ozz was mixed with either of its substrates, compared to the molecular weights of CRL5Ozz or its individual substrates: CRL5Ozz-β-catenin (~ 272 kDa), CRL5Ozz-MyHCemb fragment (1041-1941 a.a.) (~ 280 kDa) or CRL5Ozz-Alix (~ 282 kDa). (d) Sedimentation analysis of Alix mixed with albumin as internal control. The fractions were loaded on an SDS-polyacrylamide gel and stained with SYPRO Ruby. Alix and albumin were visible in fractions 2-12 and the profile of either protein was not altered in the presence of the other.

Purified CRL5Ozz interacts with and ubiquitinates substrates in vitro. Having purified and reconstituted CRL5Ozz, we wanted to ascertain whether it could promote the in vitro ubiquitination of its substrates. Therefore, we incubated purified GST-β-catenin (Fig. 7a), GST-MyHCemb (Fig. 7b) and GST-Alix (Fig. 7c) with the purified, reconstituted ligase and the remaining components of the ubiquitination reaction (Fig. 7a-c). Omission of the ligase or of any of the components from the reaction mixtures prevented ubiquitination of the substrates. Lastly, to discern whether the CRL5Ozz complex promoted mono-, multi- or polyubiquitination of its substrates, we performed in vitro ubiquitination assays with either wild type ubiquitin or a ubiquitin mutant carrying the K48R amino acid substitution, which abrogates the formation of polyubiquitin chains. Polyubiquitination of Ozz substrates occurred only when we used the non-mutated form of ubiquitin in the ubiquitination reaction (Fig. 7d-f).
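The migration shifts described for Fig. 6 are read off densitometric band-intensity profiles. A simple, hedged way to quantify such a shift is to compare the intensity-weighted centres of mass of the fraction profiles, as sketched below with made-up intensity values standing in for scanned gel data.

```python
# Sketch: quantifying a gradient co-migration shift from densitometric band
# intensities. The intensity arrays are hypothetical placeholders; real values
# would come from scanning the SYPRO Ruby-stained gels fraction by fraction.
import numpy as np

def profile_center(intensities):
    """Intensity-weighted mean fraction number of a sedimentation profile."""
    i = np.asarray(intensities, dtype=float)
    fractions = np.arange(1, len(i) + 1)
    return float((fractions * i).sum() / i.sum())

substrate_alone   = [0, 0, 1, 4, 9, 10, 7, 3, 1, 0, 0, 0]  # peak near fractions 5-6
substrate_with_E3 = [0, 0, 0, 1, 3, 7, 10, 8, 4, 2, 1, 0]  # peak near fraction 7

shift = profile_center(substrate_with_E3) - profile_center(substrate_alone)
print(f"centre-of-mass shift: {shift:+.2f} fractions (positive = heavier)")
```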
Altogether, these results indicate that Ozz functions in vitro as the substrate-recognition component of CRL5Ozz, which recruits and polyubiquitinates β-catenin, MyHCemb and Alix. Discussion CRLs constitute one of the largest families of E3 ligases and are conserved among species 26,27. Their pivotal role in cell physiology and homeostasis is evidenced by the pathogenic effects of their impaired or deregulated activity in human diseases, such as cancer and neurodegenerative diseases 28,29. To date, only a handful of CRL complexes have been described in sufficient detail to provide information on their protein components and how they are structurally organized 28,[30][31][32][33]. This is because high expression of their individual full-length proteins, and their reconstitution into active ligase complexes, have been difficult to achieve. In addition, although over 400 CRL members have been identified, the natural substrates of most of them are still unknown 26,27. Here we describe the production and purification of CRL5Ozz, a member of the CRL family of ubiquitin ligases that is specific for striated muscle and is involved in myofibrillogenesis and myofiber differentiation 1,9,10. Within the complex, Ozz is the substrate-recognition component 1,9,10. Ozz embeds two adjacent substrate-recognition domains, NHR1 and NHR2, which the protein shares with the Drosophila single-chain ubiquitin ligase, Neur 15,34. Previous biophysical studies on the structural assembly of CRL complexes demonstrated that these multisubunit Cullin-RING ligases function as monomers or dimers 35,36. These structural configurations of CRL complexes might be necessary for high-avidity binding to the substrates and/or for the acquisition of the optimal stoichiometry for substrate recognition 26. In two prototypical E3 ubiquitin ligases, CRL2VHL and SCFFBW7, the interface that drives dimerization was shown to be mediated by the adaptor proteins VHL and FBW7, respectively [36][37][38]. Our biophysical analysis of Ozz-EloBC showed that the complex exists both as a monomer and as a dimer, a characteristic that is not shared by the Cul5-Rbx1 complex. The latter observation suggests that the interface for the CRL5Ozz dimer is provided by the Ozz-EloBC complex, and more specifically by Ozz itself. Ozz contains two NHR domains forming the bulk of the protein (amino acids 23-244 out of 285). These domains mediate protein-protein interactions, as demonstrated for the NHR domains of the E3 Neur, and are crucial for the oligomerization of that ligase 15,39. Those authors proposed that the NHR domains of Neur might form an intramolecular structure that regulates its substrate recognition and ubiquitination activity 39. By analogy with Neur, the NHR domains in Ozz may promote substrate recognition as well as CRL5Ozz dimerization, the latter configuration being abrogated by the presence of the substrate. Crystal structure studies of CRL2VHL emphasized the importance of the substrates for the stabilization of the ligase 28. The authors showed that CRL2VHL cannot form crystals in the absence of a 19-mer peptide of its substrate HIF-1α, and reasoned that the substrate may be required to confer stability to the conformational arrangement of the ligase that facilitates crystallization 28. This finding suggests that CRL-type ligases are highly flexible and acquire more than one structural orientation to accommodate different substrates.
As is the case for other E3s 40,41, CRL5Ozz targets multiple substrates located in different cellular compartments. It is, therefore, conceivable that individual Ozz substrates promote conformational rearrangements of CRL5Ozz that stabilize the ligase for optimal delivery of the substrates to the proteasome. However, the exact stoichiometry of CRL5Ozz is still unknown, and further work is needed to define its ultrastructural architecture and the exact mechanism(s) of substrate recognition.
7,786.8
2022-05-12T00:00:00.000
[ "Biology", "Chemistry" ]
Dynamically defined subsets of generic self-affine sets In dynamical systems, shrinking target sets and pointwise recurrent sets are two important classes of dynamically defined subsets. In this article we introduce a mild condition on the linear parts of the affine mappings that allows us to bound the Hausdorff dimension of cylindrical shrinking target and recurrence sets. For generic self-affine sets in the sense of Falconer, that is, by randomising the translation parts of the affine maps, we prove that these bounds are sharp. These mild assumptions mean that our results significantly extend and complement the existing literature for recurrence on self-affine sets. Introduction The shrinking target problem in dynamical systems investigates the "size" of the set of points that recur to a collection of (shrinking) targets infinitely many times. Letting (X, T, µ) be a dynamical system with invariant measure µ and a collection of (measurable) subsets (B_k)_{k∈N}, B_k ⊆ X, one investigates R((B_k)_k) = {x ∈ X : T^k(x) ∈ B_k for infinitely many k ∈ N}. Often these sets are dense in the original space X, as well as G_δ, and so dimension theory is used to classify the sizes of such sets. The Hausdorff dimension is the most appropriate choice here, as dense G_δ sets have full dimension for, e.g., the packing-, Minkowski-, and Assouad-type dimensions. The shrinking target problem was first investigated by Hill and Velani for Julia sets, who analysed their Hausdorff dimension [12] and found a zero-one law for its Hausdorff measure [13]. The shrinking target problem has intricate links to number theory when using naturally arising sets in Diophantine approximation as the shrinking targets. This has received a lot of attention over recent years, see for instance [1,5,18,23,24] for shrinking target sets and [2,6,8,9,16,19,20,21] for related research. The literature on recurrence sets has so far focussed mostly on zero-one laws for conformal and one-dimensional dynamics, such as β-transformations, see Tan and Wang [26], and Zheng and Wu [28]. For self-similar and self-conformal dynamics these questions were explored by Seuret and Wang [27], who also gave a pressure formula for the Hausdorff dimension, as well as Baker and Farmer [3], who stated a zero-one law dependent on a convergence condition of the size of the neighbourhoods. Finally, and most recently, Kirsebom, Kunde, and Persson [17] studied linear maps on the d-dimensional torus. The above works mostly concern dynamical systems in R^1 or conformal dynamics, and transitioning to higher-dimensional non-conformal dynamics presents severe challenges. To circumvent the extreme challenges that affinities pose, a common strategy is to "randomise" the affine maps by considering typical translation parameters. This approach was first considered by Falconer in his seminal article [7], whose conditions were significantly relaxed by Solomyak [25] and generalised by Jordan, Pollicott and Simon [14]. This typicality with respect to the translation parameter allows one to say more about the regularity of the attractors and is a commonly employed strategy, see for example [14]. Using such randomisation, Koivusalo and Ramírez [18] gave an expression for the Hausdorff dimension of a self-affine shrinking target problem. They show that for a fixed symbolic target with exponentially shrinking diameter and well-behaved affine maps, the Hausdorff dimension is typically given by the zero of an appropriate pressure function.
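To make the definition of R((B_k)_k) concrete, the following toy sketch (our illustration, not from the paper) records the times k at which an orbit of the doubling map T(x) = 2x mod 1 lands in the shrinking balls B_k = B(0.3, 1/k). Exact rational arithmetic is used because repeated doubling in floating point collapses to 0; the seed and target centre are arbitrary choices.

```python
# Toy illustration: hit times into shrinking targets under the doubling map.
from fractions import Fraction

def hit_times(x0, centre=0.3, n_steps=10_000):
    x, times = x0, []
    for k in range(1, n_steps + 1):
        x = (2 * x) % 1                     # doubling map, exact on rationals
        d = abs(float(x) - centre)
        if min(d, 1.0 - d) < 1.0 / k:       # circle distance vs shrinking radius
            times.append(k)
    return times

print(hit_times(Fraction(12345, 99991))[:10])
```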
Strong assumptions are made on the affine system as well as on the fixed target, and in this article we significantly improve upon their results. We will show that for a large family of self-affine systems and dynamical targets with non-fixed centres the Hausdorff dimension is given by the intersection of two pressures: one being the standard self-affine pressure function, the other being an inverse lower pressure related to the target. Crucially, we require neither the target to be fixed nor the inverse pressure to exist as a limit. Our condition also allows us to investigate the dimensions of sets with a pointwise recurrence, a quantitative version of recurrence for self-affine dynamics. As far as we are aware, this is the first time this has been attempted for non-conformal dynamics in higher dimensions. Self-affine sets and symbolic space Let A = {A_1, A_2, ..., A_N} be a collection of non-singular d × d contracting matrices, and let t = {t_1, t_2, ..., t_N} be a collection of N vectors in R^d. Let {1, ..., N} be a finite alphabet and write Σ_n, Σ*, Σ for the set of words of length n, the set of all finite words, and the set of all infinite words, respectively. For words i ∈ Σ_n and j ∈ Σ we write i = i_1 i_2 ... i_n and j = j_1 j_2 ... to denote the individual letters of i and j. For a word i ∈ Σ*, let |i| denote the length of i. For any two words i, j ∈ Σ, let us denote the common prefix by i ∧ j, that is, i ∧ j = i_1 ... i_n for the maximal n with i_1 ... i_n = j_1 ... j_n. Let Φ_t = {f_1, ..., f_N}, where f_i(x) = A_i x + t_i, be an iterated function system formed by affine maps on R^d. For a finite word i ∈ Σ*, let f_i = f_{i_1} ∘ ... ∘ f_{i_|i|} and A_i = A_{i_1} ... A_{i_|i|}. It is a classical result that there exists a unique non-empty compact set Λ ⊂ R^d such that Λ = ⋃_{i=1}^N f_i(Λ). To avoid singleton sets we assume that N ≥ 2 throughout. Let us denote by π = π_t the natural projection from Σ to the attractor of Φ_t, that is, π_t(i) = lim_{n→∞} f_{i_1} ∘ ... ∘ f_{i_n}(0). Clearly, π_t(i) = f_{i_1}(π_t(σ i)), and so π_t(i) = t_{i_1} + Σ_{n=1}^∞ A_{i_1} ... A_{i_n} t_{i_{n+1}}. For any ball B, clearly, A(B) is an ellipsoid and, as was shown in [7, Proof of Proposition 5.1], it can be covered by at most (4|B|)^d α_1(A)···α_{⌊s⌋}(A)/α_{⌈s⌉}(A)^{⌊s⌋}-many cubes with side length α_{⌈s⌉}(A). The pressure of the self-affine system is defined as P(t) = lim_{n→∞} (1/n) log Σ_{i∈Σ_n} φ^t(A_i), where we note that this limit exists because of the subadditivity of n ↦ log Σ_{i∈Σ_n} φ^t(A_i). Further, the pressure is continuous in t, strictly decreasing, and satisfies P(0) = log N and P(t) → −∞ as t → ∞. Throughout the paper we will use the following extra condition (Condition 2.1): Assume that A is such that for every s > 0 there exist C > 0 and K ∈ N such that for every i, j ∈ Σ* there exists k ∈ Σ_K with φ^s(A_{ikj}) ≥ C φ^s(A_i) φ^s(A_j). Similar conditions have been introduced earlier by Feng [10] and Käenmäki and Morris [15]. Feng [10, Proposition 2.8] showed that under a mild irreducibility condition there exist C > 0 and m ∈ N such that for every i, j ∈ Σ* there is a word k with |k| ≤ m satisfying ‖A_{ikj}‖ ≥ C‖A_i‖‖A_j‖. Later, this inequality was generalised by Käenmäki and Morris [15, Lemma 3.5] to the singular value function under more restrictive but natural irreducibility conditions. Unfortunately, the uncertainty in the length of the "buffer" word k in the previous conditions does not allow us to study shrinking target and recurrence sets effectively. We will show in Section 2.4 and Section 5 that under some irreducibility and proximality assumptions, Condition 2.1 holds. Shrinking targets Let (λ_k)_{k∈N} ∈ (Σ*)^N be a sequence of target cylinders. We are interested in the shrinking target set R_t((λ_k)_{k∈N}) = π_t { i ∈ Σ : σ^k i ∈ [λ_k] for infinitely many k ∈ N }. For our sequence of target cylinders, we define the following inverse lower pressure: α(s) = liminf_{n→∞} −(1/n) log φ^s(A_{λ_n}).   (2.1) If liminf_{k→∞} |λ_k|/k < ∞ then there exists a unique solution s_0 to the equation P(s_0) = α(s_0) ≥ 0, see Lemma 3.4. Otherwise s_0 = 0.
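The quantities just introduced are straightforward to approximate numerically. The sketch below is a finite-level approximation with arbitrary toy matrices and the toy target sequence λ_k = 1^⌈k/2⌉ — all our assumptions — computing φ^s from singular values, estimating P(s) at a fixed word length n, estimating α(s), and locating s_0 with P(s_0) = α(s_0) by bisection (valid here since P − α is decreasing).

```python
# Numerical sketch: phi^s, the pressure P(s), the inverse lower pressure
# alpha(s), and the root s0 of P(s) = alpha(s). Limits are replaced by fixed
# word lengths, so the printed value is only an approximation.
import itertools, math
import numpy as np

A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.0], [0.2, 0.5]])]

def phi_s(M, s):
    sv = np.linalg.svd(M, compute_uv=False)          # alpha_1 >= alpha_2
    k = int(math.floor(s))
    if k >= len(sv):
        return float(np.prod(sv)) ** (s / len(sv))   # convention for s >= d
    return float(np.prod(sv[:k]) * sv[k] ** (s - k))

def pressure(s, n=8):
    total = sum(phi_s(np.linalg.multi_dot([A[i] for i in w]), s)
                for w in itertools.product(range(len(A)), repeat=n))
    return math.log(total) / n

def alpha(s, n=60):
    m = (n + 1) // 2                                  # lambda_n = 1^ceil(n/2)
    return -math.log(phi_s(np.linalg.matrix_power(A[0], m), s)) / n

def solve_s0(lo=1e-6, hi=2.0, tol=1e-6):
    f = lambda s: pressure(s) - alpha(s)              # decreasing in s
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(f"s0 ~ {solve_s0():.3f}")
```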
We prove that this value gives the Hausdorff dimension of the shrinking target set under some assumptions on the matrices A. A similar result was obtained by Koivusalo and Ramírez [18] for shrinking targets on self-affine sets. Firstly, they assume that there exists a constant C > 0 such that φ^s(A_i A_j) ≥ C φ^s(A_i) φ^s(A_j) for every i, j ∈ Σ*; secondly, they assume that α(t) is taken as a limit. The first condition holds only for a restrictive family of matrices, see Remark 2.6. By using a more detailed analysis of the pressure function, we were able to relax the condition on the limit as well. Recurrence sets Now, we turn our attention to the recurrence sets. Let ψ : N → N, and let β = liminf_{n→∞} ψ(n)/n. Consider the set S_t(ψ) := π_t { i ∈ Σ : σ^k i ∈ [i|_{ψ(k)}] for infinitely many k ∈ N }. Let us define the square-pressure function P_2(t); the limit defining it again exists because of subadditivity. Further, this pressure is continuous in t, strictly increasing, and satisfies P_2(0) = −log N and P_2(t) → ∞ as t → ∞. Under Condition 2.1 and assuming β < 1, we show (Theorem 2.4) that for Lebesgue-almost every t, dim_H S_t(ψ) = min{d, r_0}, where r_0 is the unique solution of the equation (1 − β)P(r_0) = βP_2(r_0).   (2.3) Moreover, L^d(S_t(ψ)) > 0 for Lebesgue-almost every t if r_0 > d. The equation (2.3) applies specifically only to the case β ≤ 1; for other values of β it needs to be modified accordingly. The condition β < 1 is purely technical and relies on the fact that the buffer word in Condition 2.1 depends on both of the words before and after it. Hence, for recurrence rates greater than 1 it might cause "self-dependence" in the buffer word, which then may not exist. We note that under the stronger assumption on the matrices of Koivusalo and Ramírez [18], Theorem 2.4 can be generalised to any value β ∈ [0, ∞] with a straightforward modification of (2.3) and of the proof of Theorem 2.4. Irreducibility of matrices Let us denote by ∧^k R^d the k-th exterior product of R^d. For A ∈ GL_d(R), we can define an invertible linear map A^{∧k} : ∧^k R^d → ∧^k R^d. Let us consider the following tensor product of the exterior algebras: W = ∧^1 R^d ⊗ ∧^2 R^d ⊗ ... ⊗ ∧^{d−1} R^d. Again, for A ∈ GL_d(R), we can define an invertible linear map Ā : W → W by setting, for u = u_1 ⊗ ... ⊗ u_{d−1}, Ā(u) = (A^{∧1}u_1) ⊗ ... ⊗ (A^{∧(d−1)}u_{d−1}). We define a linear subspace W̄ of W, which is generated by the flags of R^d as follows: W̄ = span{ v_1 ⊗ (v_1 ∧ v_2) ⊗ ... ⊗ (v_1 ∧ ... ∧ v_{d−1}) : v_1, ..., v_{d−1} ∈ R^d }. We call W̄ the flag vector space. Note that the flag space W̄ is invariant with respect to the linear map Ā for A ∈ GL_d(R). We say that A ∈ GL_d(R) is fully proximal if its d eigenvalues are distinct in absolute value. Note that A is fully proximal if and only if A^{∧k} is 1-proximal for every k, if and only if Ā is 1-proximal on W̄. We say that the tuple A is fully proximal if there exists a finite product A_{i_1} ··· A_{i_k}, formed by the elements of A, which is fully proximal. We say that the tuple A is fully strongly irreducible, or strongly irreducible over W̄, if there are no finite collections V_1, ..., V_n of proper subspaces of W̄ such that ⋃_{A∈A} ⋃_{k=1}^n Ā V_k = ⋃_{k=1}^n V_k. Proposition 2.5. Let A be a tuple of matrices in GL_d(R) such that A is fully proximal and fully strongly irreducible. Then for every 0 < s < d there exist C > 0 and K ∈ N such that for every i, j ∈ Σ* there exists k ∈ Σ_K with φ^s(A_{ikj}) ≥ C φ^s(A_i) φ^s(A_j). Remark 2.6. Koivusalo and Ramírez [18] assumed that there exists a constant D > 0 such that φ^s(A_i A_j) ≥ D φ^s(A_i) φ^s(A_j) for all i, j ∈ Σ*. Bárány, Käenmäki and Morris [4, Corollary 2.5] showed that this condition for planar matrix tuples A is equivalent with the following: A can be decomposed into two sets A_e and A_h such that A_e is strongly conformal (i.e. can be transformed into orthonormal matrices with a common base transformation) and, if A_h ≠ ∅, then A_h has a strongly invariant multicone C (i.e. a multicone C such that ⋃_{A∈A_h} A(C) is contained in the interior of C).
Assuming full strong irreducibility and full proximality is clearly a less restrictive requirement. For instance, in the case of planar matrices, full strong irreducibility and full proximality are equivalent to strong irreducibility and proximality. Using Proposition 2.5 we obtain the following immediate corollaries. Corollary 2.7. Let A be a collection of d × d matrices. Suppose that A is fully strongly irreducible and fully proximal. Then, for Lebesgue-almost every t, dim_H R_t((λ_k)_k) = min{d, s_0}; moreover, L^d(R_t((λ_k)_k)) > 0 for Lebesgue-almost every t if s_0 > d. Corollary 2.8. Let A be a collection of d × d matrices. Suppose that A is fully strongly irreducible and fully proximal. Then, for Lebesgue-almost every t, dim_H S_t(ψ) = min{d, r_0}, where r_0 is the unique solution of the equation (1 − β)P(r_0) = βP_2(r_0). Structure. We prove Theorem 2.2 in Section 3 and Theorem 2.4 in Section 4 using Condition 2.1. First, we derive elementary results on the inverse lower pressure α defined in (2.1) in Section 3.1. We will also recall results about the pressure P and prove the uniqueness of the solution of P(s_0) = α(s_0). We proceed in Section 3.2 by proving the upper bound of Theorem 2.2 and finish the lower bound proof in Section 3.3 with an energy estimate. Similarly, Section 4.1 is devoted to showing the upper bound and Section 4.2 to showing the lower bound of Theorem 2.4. Section 5 contains the proof of Proposition 2.5, which shows that the assumptions in Corollaries 2.7 and 2.8 are sufficient. Basic properties and the inverse lower pressure function Let (λ_k)_{k∈N} ∈ (Σ*)^N be a sequence and let α be the corresponding inverse lower pressure defined in (2.1). Observe that by definition α(s) ≥ 0 for all s ≥ 0. Lemma 3.1 asserts that α(s) = 0 for some s > 0 if and only if liminf_{n→∞} |λ_n|/n = 0, in which case α ≡ 0. Assume that α(s) = 0. Since −(1/n) log φ^s(A_{λ_n}) ≥ 0, this implies that there is a subsequence n_k such that (1/n_k) log φ^s(A_{λ_{n_k}}) ↗ 0. But then, writing γ = max_{i∈Σ_1} α_1(A_i) < 1, we have (1/n_k) log γ^{s|λ_{n_k}|} = (s|λ_{n_k}|/n_k) log γ ↗ 0, and so |λ_{n_k}|/n_k ↘ 0, as required. For the other direction assume |λ_{n_k}|/n_k → 0 for some subsequence n_k. Then, for any t ≥ 0, writing γ̂ = min_{i∈Σ_1} α_d(A_i), α(t) ≤ liminf_{k→∞} −(1/n_k) log φ^t(A_{λ_{n_k}}) ≤ liminf_{k→∞} (t|λ_{n_k}|/n_k) log γ̂^{−1} = 0. Combining this with the trivial inequality α(t) ≥ 0, we get the desired conclusion that α(t) = 0 for all t ≥ 0. Similarly, if the modified pressure function is extremal in the other direction, it must be extremal everywhere. The proof is analogous to that of Lemma 3.1 and is left to the reader. Proof. Note that by Lemma 3.1 and Lemma 3.2 the inverse lower pressure satisfies 0 < α(t) < ∞ for all t > 0. Letting t = 0, we have α(0) = 0 and 0 ≤ α(t) ≤ t (liminf_{n→∞} |λ_n|/n) log γ̂^{−1} → 0 as t → 0. This shows that α(t) is continuous at t = 0. For any t > 0 and ε > 0 sufficiently small, a comparison of the defining limits — applying (3.2) with s = t − ε in the last inequality — yields α(t) > α(t − ε), which shows that α(t) is strictly monotone increasing on (0, ∞). For an s > 0, let n_k(s) be a sequence along which the lower limit in α(s) is achieved. Comparing the values of the defining sequence at nearby parameters along n_k(s) yields, for every s > 0, an upper bound on the increments of α near s, which together with (3.3) implies continuity. Proof. If liminf_{k→∞} |λ_k|/k > 0 then the first statement follows by Lemma 3.3, since P(0) − α(0) = log N, P(t) − α(t) → −∞ as t → ∞, and P(t) − α(t) is strictly monotone decreasing. If liminf_{k→∞} |λ_k|/k = 0 then by Lemma 3.1 α(t) ≡ 0 for t ≥ 0, and the uniqueness of the solution follows from the uniqueness of the root of P. The second conclusion follows from the observation that α(t) ≥ 0 for all t ≥ 0. The following lemma is standard, but we include it for completeness. Upper bound to Theorem 2.2 Note that R_t((λ_k)_k) is a lim sup set that can be written as R_t((λ_k)_k) = π_t ⋂_{n=1}^∞ ⋃_{k=n}^∞ { i ∈ Σ : σ^k i ∈ [λ_k] }. Temporarily fix t ≥ 0. By definition, for every δ > 0 there exists k_0 large enough such that −(1/k) log φ^s(A_{λ_k}) ≥ α(s) − δ for all k ≥ k_0. This can be rearranged to give φ^s(A_{λ_k}) ≤ e^{−k(α(s)−δ)}. Similarly, for every δ > 0, we obtain Σ_{i∈Σ_k} φ^s(A_i) ≤ e^{k(P(s)+δ)} for large enough k.
For the lower bounds, we note that for all δ > 0 there exists a subsequence k_n such that φ^s(A_{λ_{k_n}}) ≥ e^{−k_n(α(s)+δ)},   (3.5) and, for large enough k, Σ_{i∈Σ_k} φ^s(A_i) ≥ e^{k(P(s)−δ)},   (3.6) by submultiplicativity and the existence of the limit. Assume that liminf_k |λ_k|/k < ∞. Let s > s_0 and note that P(s) − α(s) < 0. We set δ > 0 small enough that η := P(s) − α(s) + 2δ < 0. Let B be a ball with sufficiently large radius such that f_i(B) ⊂ B for all i = 1, ..., N. Hence, by using the cover given in [7, Proof of Proposition 5.1], we obtain that the s-dimensional Hausdorff measure of R_t((λ_k)_k) is bounded by a constant multiple of Σ_k e^{kη} < ∞. Since s > s_0 was arbitrary, we conclude that dim_H R_t((λ_k)_k) ≤ s_0 for all t. Finally, consider the case when liminf_k |λ_k|/k = ∞. Let s > 0 be arbitrary and again write γ = max_{i∈Σ_1} α_1(A_i). Recall that #Σ_1 = N and observe that there exists M such that |λ_k| ≥ 2k log N/(s log γ^{−1}) for k ≥ M. Therefore γ^{s|λ_k|} ≤ N^{−2k} for large enough k, and the Hausdorff measure bound above becomes summable. As s > 0 was arbitrary, this shows that dim_H R_t((λ_k)_k) = 0 for all t. Lower bound to Theorem 2.2 To simplify the exposition we will abuse notation slightly and write φ^s(i) instead of φ^s(A_i) for i ∈ Σ*. For every sufficiently large p ∈ N and s < min{s_0, d}, we construct a measure ν^s_p on the symbolic space Σ and investigate its projection under the self-affine iterated function system. Let m_k be a sequence along which the lower limit in α(s) is achieved, and take a very sparse subsequence such that Σ_{k=1}^n m_k ≤ (1 + 2^{−n}) m_n and m_n ≥ 2^n, where K is the length of the buffer word defined in Condition 2.1. We may further assume, without loss of generality, that m_1 ≫ p. By the pigeonhole principle there exists 1 ≤ p_0 ≤ p + K such that m_k = p_0 + (K + p)q for infinitely many q. Again, by taking subsequences, we may assume that m_k is always of the form p_0 + (K + p)q for some q. If p_0 > K then we redefine p_0 := p_0 − K, otherwise let p_0 := p_0 + p. We will obtain ν^s_p as the weak limit of descending measures ν^s_{p,k} : Σ → [0, 1]. The construction is fairly intricate and involves splitting the measure into blocks of length p with "buffers" of length K in between that are given by Condition 2.1. However, at each position m_ℓ, we want to append λ_{m_ℓ}. To ensure consistency of lengths, we need to slightly modify λ_{m_ℓ} by extending the words to be of length p + q(K + p) for some q ≥ 0. To this end we define λ'_{m_ℓ} = λ_{m_ℓ} 11...1, where the number of symbol 1's is p − |λ_{m_ℓ}| mod (K + p). Let Ω(k) denote the set of admissible blocks beginning at position k: the single word λ'_{m_ℓ} if the target λ_{m_ℓ} is to be appended there, and Σ_p otherwise. For every i_1, i_2 ∈ Σ*, denote the word in Condition 2.1 by k(i_1, i_2) ∈ Σ_K. We define a collection of words K_n by induction. Let K_0 := Σ_{p_0}. Suppose that K_n is defined for some n ≥ 0. Then let us define K_{n+1} as K_{n+1} := { i k(i, j) j : i ∈ K_n, j ∈ Ω(ℓ_n + K) }. To ease notation let ℓ_k denote the length of words in K_k. Observe that by construction, every i ∈ K_n can be written in the form i = i_1 k_1 i_2 k_2 ... k_n i_{n+1}, where for every k ∈ {2, ..., n + 1}, i_k ∈ Ω(ℓ_{k−1} + K) and k_k = k(i_1 k_1 ... k_{k−1} i_k, i_{k+1}). While the cylinders in K_n consist of the same number of blocks (n + 1) and buffers (n), their lengths are not necessarily p_0 + n(p + K) due to the different lengths of the λ'_{m_i}. Their lengths are, however, by construction always of the form p_0 + q(p + K) for some integer q ≥ n. This ensures that we can construct a lim sup set of codings K by K := ⋂_{n≥0} ⋃_{i∈K_n} [i]. Let η(n) denote the number of λ'_{m_ℓ} blocks in the words of K_n. We start by defining ν^s_{p,0} on cylinders of length no less than p_0 by ν^s_{p,0}([i h]) := φ^s(i) N^{−|h|} / Σ_{j∈Σ_{p_0}} φ^s(j) for i ∈ Σ_{p_0} = K_0 and h ∈ Σ*. This uniquely defines a probability measure on Σ, i.e. ν^s_{p,0}(Σ) = 1.
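The block-and-buffer bookkeeping in the construction above can be illustrated mechanically. The toy sketch below — a schematic of ours, not the actual measure construction — pads a target word to length p mod (p + K) as for λ', concatenates p-blocks with length-K buffers, and checks that the assembled word has length p_0 + q(p + K); the buffer chooser is a stand-in for the word k(i, j) from Condition 2.1.

```python
# Schematic illustration of the block/buffer word assembly (toy values).
p, K, p0 = 4, 2, 4

def buffer(prefix, nxt):
    return "1" * K                      # placeholder for the word k(prefix, nxt)

def pad_target(lam):
    """Extend lam with 1s so its length is p mod (p + K), as for lambda'."""
    r = (p - len(lam)) % (p + K)
    return lam + "1" * r

def assemble(blocks):
    word = blocks[0]                    # first block has length p0
    for b in blocks[1:]:
        word += buffer(word, b) + b     # length-K buffer before each block
    return word

blocks = ["1212", pad_target("21121"), "2121", "1122"]
w = assemble(blocks)
q, rem = divmod(len(w) - p0, p + K)
print(w, len(w), f"= p0 + {q}*(p+K) + {rem}")   # rem is 0, as claimed
```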
We define ν^s_{p,n} on cylinders with prefix in K_n by redistributing, within each cylinder [i] with i ∈ K_{n−1}, the mass onto the subcylinders determined by K_n proportionally to the singular value function of the appended block. Observe that for any cylinder set O ⊆ Σ, the measures ν^s_{p,k}(O) are eventually monotone decreasing, and hence ν^s_p(O) ≤ ν^s_{p,k}(O) for all sufficiently large k ∈ N, where ν^s_p is the weak limit of (ν^s_{p,k})_{k∈N}. Lemma 3.6. Let k ∈ N_0. Then C^k Σ_{i_1 k_1 ... i_{k+1} ∈ K_k} φ^s(i_1) ... φ^s(i_{k+1}) ≤ Σ_{i∈K_k} φ^s(i) ≤ Σ_{i_1 k_1 ... i_{k+1} ∈ K_k} φ^s(i_1) ... φ^s(i_{k+1}),   (3.9) where C is the constant appearing in Condition 2.1. Remark 3.7. Observe that the summations in (3.9) are all over the same set. We have changed the subscript to emphasise these two points of view of K_k versus its constituent parts. Proof. The last inequality follows from the submultiplicativity of φ^s and the fact that φ^s(k_j) < 1. The first inequality follows inductively from repeated application of Condition 2.1, as follows. The base case k = 0 holds trivially, since K_0 = Σ_{p_0}. For the induction step, assume that (3.9) holds for k ≥ 0. Applying Condition 2.1 to words in K_{k+1} gives Σ_{i∈K_{k+1}} φ^s(i) ≥ C Σ_{i∈K_k} Σ_j φ^s(i) φ^s(j), and the induction hypothesis immediately gives the claim, which completes the proof. The proof of Theorem 2.2 reduces mainly to the following technical lemma. Lemma 3.8. Let s_0 > 0 be such that P(s_0) = α(s_0). Then for all 0 < t < s < s_0 and sufficiently large p, ∫ ∬ |π_t(i) − π_t(j)|^{−t} dν^s_p(i) dν^s_p(j) dt < ∞,   (3.10) where the outer integration is over a bounded set of translation parameters t. Proof. Let p ∈ N be large enough that γ^{(s−t)p} < C, where 0 < γ < 1 and 0 < C < 1 are the constants appearing in Lemma 3.5 and Condition 2.1, respectively. Since s < s_0, we have P(s) > α(s) and we can pick δ > 0 such that P(s) − α(s) > 4δ, and choose p (which so far only depends on C and γ) large enough that we may apply (3.5) and (3.6) with δ; moreover, we require that pδ > KP(s) − 2Kδ. Recall that ν^s_p is supported on K and note that for all distinct i, j ∈ K, their longest common prefix i ∧ j must be a word of the form i_1 k_1 ... i_n i′ for some i′ ∈ Σ_{≤(p+K)} = ⋃_{k=0}^{p+K} Σ_k and n maximal. To see this, assume |i′| > p + K. Then i′ must have a prefix of the form k_n λ'_{m_j}. But since all words h ∈ K satisfy (σ^{m_j} h)|_{|λ'_{m_j}|} = λ'_{m_j}, so must j, and we obtain i ∧ j = i_1 k_1 ... i_n k_n λ'_{m_j} i″ for some finite word i″. This, however, contradicts the maximality of n, and our claim follows. Note further that by the boundedness of the length of i′ by p + K, as well as the non-singularity of the matrices A_i, there exists a universal constant D for the IFS such that ∫ |π_t(i) − π_t(j)|^{−t} dt ≤ D / φ^t(i ∧ j).   (3.11) The double integral (3.10), together with (3.11), simplifies to a sum over the possible common prefixes i ∧ j, weighted by the product measure and 1/φ^t(i ∧ j). Thus, by the definition of K_n and by Lemma 3.5, each term can be estimated in terms of some 0 < γ < 1. Again, let η(n) denote the number of λ_{m_l} blocks in K_n. Using Lemma 3.6, we can bound the resulting sum, for some c > 0; then, by (3.5) and (3.6), each target block contributes a factor e^{m_i(α(s)+δ)}. Applying (3.8) and pδ > KP(s) − 2Kδ, and observing that clearly ℓ_n ≥ m_{η(n)} + |λ'_{m_{η(n)}}|, we can apply (3.7) to bound the sum term by term. Coupling this with the observation that C^{−1}γ^{(s−t)p} < 1 and (1 + 2^{−n})α(s) − (1 − 2^{−n})P(s) + 3δ < 0 for sufficiently large n, the expression above is bounded by a geometric series with ratio less than one and hence is bounded. It immediately follows that (3.10) is bounded and the t-energy of ν^s_p is finite, as required. Proof of Theorem 2.2. To show that dim_H R_t((λ_k)_k) ≥ s_0 for Lebesgue-almost every t, it is enough to show that for every t < s_0 we have dim_H R_t((λ_k)_k) ≥ t for Lebesgue-almost every t. Let t < s < s_0 and p be as in Lemma 3.8. By Frostman's lemma (see for example [22, Chapter 8]), it is enough to show that the t-energy ∬ |π_t(i) − π_t(j)|^{−t} dν^s_p(i) dν^s_p(j) is finite for Lebesgue-almost every t, which follows since its integral over t is finite by Lemma 3.8. Now, let us turn to the proof that L^d(R_t((λ_k)_k)) > 0 for Lebesgue-almost every t if s_0 > d. Let s be such that s_0 > s > d.
It is enough to show that (π_t)_* ν^s_p ≪ L^d, and by [22, Theorem 2.12], to do so it is enough to prove that ∬ |π_t(i) − π_t(j)|^{−d} dν^s_p(i) dν^s_p(j) < ∞ for Lebesgue-almost every t, where the finiteness follows again by Lemma 3.8. Remark 4.1. We note that if β > 1 then the argument above is not optimal. Lower bound for Theorem 2.4 The proof is analogous to the lower bound of Theorem 2.2, with some necessary modifications. Let p ∈ N be an integer which will be specified later. Let m_k be a sequence along which the lower limit β = liminf_{n→∞} ψ(n)/n is achieved, and take a sparse subsequence satisfying the analogue of the sparseness conditions of Section 3.3 (see (4.1)). Let us choose p_0 as in Section 3.3, so that m_k = p_0 + (p + K)q for every k ≥ 1 and some q ∈ N. To ensure consistency of lengths, we again need to slightly modify ψ(m_ℓ), extending the words to be of length p + q(K + p) for some q ≥ 0. To this end we define ψ′(ℓ) analogously to λ'_{m_ℓ} in Section 3.3. We construct a measure ν^s_p similarly to Section 3.3, except that the elements in Ω(k) depend on the previous elements. More precisely, for every i_1, i_2 ∈ Σ*, denote the word in Condition 2.1 by k(i_1, i_2) ∈ Σ_K. We define a collection of words K′_n by induction. Let K′_0 := Σ_{p_0}. Suppose that K′_n is defined for some n ≥ 0. Then let us define K′_{n+1} analogously to K_{n+1}, with the blocks now drawn from the prefix-dependent collections Ω(·, ℓ′_n + K). Denote by ℓ′_k the length of words in K′_k. Observe that by construction, again every i ∈ K′_n can be written in the form i = i_1 k_1 i_2 k_2 ... k_n i_{n+1}, where for every k ∈ {2, ..., n+1}, i_k ∈ Ω(i_1 k_1 ... i_{k−1}, ℓ_{k−1} + K) and k_k = k(i_1 k_1 ... k_{k−1} i_k, i_{k+1}). Let η′(n) denote the number of recurrences in K_n. We start by defining ν^s_{p,0} on cylinders of length no less than p_0 by the same formula as in Section 3.3, for i ∈ Σ_{p_0} = K_0 and h ∈ Σ*. This uniquely defines a probability measure on Σ, i.e. ν^s_{p,0}(Σ) = 1. We define ν^s_{p,n} on cylinders with prefix in K′_n analogously. Lemma 4.2. Let r_0 > 0 be such that (1 − β)P(r_0) = βP_2(r_0). Then for all 0 < t < s < r_0 and sufficiently large p, the analogue of (3.10) holds. Proof. By a similar argument to the beginning of Lemma 3.8, it is enough to bound the corresponding sum over common prefixes. Denote by η(n) the number of returns in K′_n. By definition, m_{η(n)} is the position of the last return, and it returns to [j|_{ψ(m_{η(n)})}]. Unfortunately, j|_{ψ(m_{η(n)})} is not necessarily an element of K′_k for all k > 0. Let k_n be the smallest integer such that ψ(m_{η(n)}) ≤ ℓ_{k_n}, where we recall that ℓ_n is the length of the elements of K′_n. Clearly, for every j = j_1 k_1 ... k_{k_n} j_{k_n+1} ∈ K′_{k_n}, φ^s(j_1)φ^s(j_2) ... φ^s(j_{k_n+1}) ≥ φ^s(j), and for j ∈ K′_n, φ^s(j|_{ψ(m_{η(n)})}) ≥ φ^s(j′), where j′ is the unique element in K′_{k_n} such that j ≺ j′. Moreover, for every j ∈ K′_n there are (n − η(n)) − (k_n − η(k_n))-many Σ_p components in the sequence σ^{ℓ_{k_n}} j. Hence, we obtain the estimate (4.2). Using (4.2) and the defining properties (4.1) of the sequence m_n, the sum is bounded by c Σ_{n=1}^∞ C^{−n} γ^{(s−t)ℓ_n} exp( −(P(s) − δ)(1 − 2^{−n}) m_{η(n)} + (P(s) + P_2(s))(β + δ) m_{η(n)} − 3 log γ · s · 2^{−n} m_{η(n)} ). Coupling this with the observation that C^{−1} γ^{(s−t)p} < 1 and −(P(s) − δ)(1 − 2^{−n}) + (P(s) + P_2(s))(β + δ) − 2 log γ · s · 2^{−n} < 0 for sufficiently large n, the left-hand side is finite and the proof is complete. Now, the proof of Theorem 2.4 is identical to the proof of Theorem 2.2 upon replacing Lemma 3.8 with Lemma 4.2, so we omit it. Justification of Condition 2.1 In this section, we give a sufficient condition under which Condition 2.1 holds. The proof is not only a modification of the proof of Käenmäki and Morris [15, Proposition 4.1] but also an application of it. First, let us recall some definitions and notation from algebraic geometry, following Goldsheid and Guivarc'h [11] and Käenmäki and Morris [15].
Let us denote by ∧^k R^d the k-th exterior product of R^d. That is, let {e_1, ..., e_d} be the standard orthonormal basis of R^d and define ∧^k R^d := span{ e_{i_1} ∧ ... ∧ e_{i_k} : 1 ≤ i_1 < ... < i_k ≤ d } for all k = 1, ..., d, and let ∧^0 R^d = R by convention. The wedge product ∧ : ∧^k R^d × ∧^l R^d → ∧^{k+l} R^d is defined in the usual way. If v ∈ ∧^k R^d can be expressed as a wedge product of k vectors of R^d then v is said to be decomposable. Let us define the Hodge star operator * : ∧^k R^d → ∧^{d−k} R^d to be the bijective linear map satisfying *(e_{i_1} ∧ ... ∧ e_{i_k}) = sgn(i_1, ..., i_d) e_{i_{k+1}} ∧ ... ∧ e_{i_d} for all permutations (i_1, ..., i_d) of (1, ..., d). On decomposable vectors the inner product is ⟨v, w⟩_∧ = det(⟨v_i, w_j⟩_{i,j}), where v = v_1 ∧ ... ∧ v_k and w = w_1 ∧ ... ∧ w_k. For A ∈ GL_d(R), we can define an invertible linear map A^{∧k} : ∧^k R^d → ∧^k R^d by setting A^{∧k}(e_{i_1} ∧ ... ∧ e_{i_k}) = (Ae_{i_1}) ∧ ... ∧ (Ae_{i_k}) and extending by linearity. For every matrix A ∈ GL_d(R), there exists a basis of orthonormal vectors {u_1, ..., u_d} such that ‖Au_i‖ = α_i(A) and {α_1(A)^{−1}Au_1, ..., α_d(A)^{−1}Au_d} is orthonormal. Hence, the operator norm of A^{∧k} is ‖A^{∧k}‖ = α_1(A) ... α_k(A). Thus, for every 0 < s ≤ d, the singular value function can be written as φ^s(A) = ‖A^{∧⌊s⌋}‖^{⌈s⌉−s} ‖A^{∧⌈s⌉}‖^{s−⌊s⌋}. Similarly, we say that A is strongly k-irreducible if there is no finite collection of proper subspaces V_1, ..., V_n of ∧^k R^d such that ⋃_{ℓ=1}^n ⋃_{A∈A} A^{∧k} V_ℓ = ⋃_{ℓ=1}^n V_ℓ. Denote by S(A) the semi-group induced by A. The following lemma is due to Käenmäki and Morris [15, Proposition 4.1]: if the relevant norm-gap inequality fails for every A ∈ S(A), then A is neither strongly k-irreducible nor strongly (k + 1)-irreducible. For two vector spaces V and W, let us define the tensor product V ⊗ W as the span of elements v ⊗ w, where for any v_1, v_2 ∈ V, w_1, w_2 ∈ W and α ∈ R, (v_1 + v_2) ⊗ w = v_1 ⊗ w + v_2 ⊗ w, v ⊗ (w_1 + w_2) = v ⊗ w_1 + v ⊗ w_2, and α(v ⊗ w) = (αv) ⊗ w = v ⊗ (αw). Let us consider the following tensor product of the exterior algebras: W = ∧^1 R^d ⊗ ∧^2 R^d ⊗ ... ⊗ ∧^{d−1} R^d. We define the inner product of W for u = u_1 ⊗ ... ⊗ u_{d−1} and v = v_1 ⊗ ... ⊗ v_{d−1} by ⟨u, v⟩ = Π_{k=1}^{d−1} ⟨u_k, v_k⟩_∧, and extend it in a bilinear, symmetric way. We define a linear subspace W̄ of W, which is generated by the flags of R^d as follows: W̄ = span{ v_1 ⊗ (v_1 ∧ v_2) ⊗ ... ⊗ (v_1 ∧ ... ∧ v_{d−1}) : v_1, ..., v_{d−1} ∈ R^d }. We call W̄ the flag vector space. Again, for an A ∈ GL_d(R), we can define an invertible linear map Ā : W → W by setting, for u = u_1 ⊗ ... ⊗ u_{d−1}, Ā(u) = (A^{∧1}u_1) ⊗ ... ⊗ (A^{∧(d−1)}u_{d−1}), and extending by linearity. It is easy to see that Ā : W̄ → W̄ for A ∈ GL_d(R). Let us denote the restriction of the inner product ⟨·,·⟩_∧ and norm ‖·‖_∧ to W̄ by ⟨·,·⟩_W̄ and ‖·‖_W̄. We say that A ∈ GL_d(R) is fully proximal if its d eigenvalues are distinct in absolute value. Note that A is fully proximal if and only if A^{∧k} is 1-proximal for every k, if and only if Ā is 1-proximal on W̄. We say that the tuple A is fully proximal if there exists an A ∈ S(A) which is fully proximal. We say that the tuple A is fully strongly irreducible, or strongly irreducible over W̄, if there is no finite collection V_1, ..., V_n of proper subspaces of W̄ such that ⋃_{A∈A} ⋃_{k=1}^n Ā V_k = ⋃_{k=1}^n V_k. Before we prove Proposition 2.5, we need to recall two important tools. Lemma 5.2. Suppose that A is fully proximal and fully strongly irreducible. Then A^⊤ = {A_1^⊤, ..., A_N^⊤} and A^m = {A_1 ··· A_m}_{A_1,...,A_m ∈ A} are also fully proximal and fully strongly irreducible for m ≥ 1. Proof. Let A ∈ GL_d(R) be a fully proximal matrix, and let λ_1, ..., λ_d and v_1, ..., v_d be the corresponding eigenvalues and eigenvectors. Then, since A^⊤ has the same eigenvalues, it is easy to see that A^⊤ is also fully proximal. Now, let us suppose that A is not fully strongly irreducible, and we show that then A^⊤ is not fully strongly irreducible either. Let V_1, ..., V_n be proper subspaces of W̄ such that ⋃_{A∈A} ⋃_{k=1}^n Ā V_k = ⋃_{k=1}^n V_k; the family of orthogonal complements is then permuted by the adjoint maps, and thus it follows that A^⊤ is not fully strongly irreducible. Similarly, the full proximality of A clearly implies the full proximality of A^m.
Moreover, if A^m is not fully strongly irreducible then there exists a finite family of proper subspaces V_1, ..., V_n of W̄ such that ⋃_{A_1,...,A_n∈A} ⋂_{i=1}^n Ā_1 ··· Ā_n V_i = ⋂_{i=1}^n V_i. Thus, the tuple A is not fully strongly irreducible, as witnessed by the finite family ⋃_{i=1}^n ⋃_{k=0}^{m−1} { Ā_{j_1} ··· Ā_{j_k} V_i : j_1, ..., j_k ∈ {1, ..., N} }. For a fully proximal matrix A, the limit G_k(A) := lim_{n→∞} (A^{∧k})^n / ‖(A^{∧k})^n‖ exists and has rank 1. Moreover, Im(G_k(A)) = span{v_1 ∧ ... ∧ v_k}, where v_i is an eigenvector corresponding to the i-th largest eigenvalue in absolute value. Moreover, for a fully proximal matrix A ∈ S_0(A), Im(Ḡ(A)) = Im(G_1(A)) ⊗ ... ⊗ Im(G_{d−1}(A)).   (5.1) The following lemma is a corollary of Goldsheid and Guivarc'h [11, Theorem 2.14]: if A is fully strongly irreducible, then A is strongly k-irreducible for every k, and there is no finite collection V_1, ..., V_n of proper subspaces of W̄ with L(A) ⊆ ⋃_{i=1}^n P(V_i). Proof. Let us argue by contradiction. First, suppose that A is not strongly k-irreducible for some k ∈ {1, ..., d − 1}. Let V_1, ..., V_n be a finite collection of proper subspaces of ∧^k R^d such that ⋃_{ℓ=1}^n ⋃_{A∈A} A^{∧k} V_ℓ = ⋃_{ℓ=1}^n V_ℓ. Let V̄_ℓ be the span of the flag vectors whose k-th component lies in V_ℓ. It is easy to see that V̄_ℓ is a proper subspace of W̄ for all ℓ = 1, ..., n, and ⋃_{ℓ=1}^n ⋃_{A∈A} Ā V̄_ℓ = ⋃_{ℓ=1}^n V̄_ℓ, which is a contradiction. Now, suppose that there exists a finite collection V_1, ..., V_n of proper subspaces of W̄ such that L(A) ⊆ ⋃_{i=1}^n P(V_i). Without loss of generality, we may assume that V_1, ..., V_n is minimal in the sense that L(A) ∩ P(V_i) is not contained in a finite union of proper subspaces of V_i. Indeed, if L(A) ∩ P(V_i) ⊆ ⋃_{i=1}^{n′} P(V′_i) for a finite collection of proper subspaces V′_1, ..., V′_{n′} of V_i, then one can replace V_i with V′_1, ..., V′_{n′}. Clearly, the procedure terminates in finitely many steps. We will show that for every A ∈ A and every j ∈ {1, ..., n} there exists i ∈ {1, ..., n} such that ĀV_j = V_i. Clearly, Ā(L(A) ∩ P(V_j)) ⊆ P(ĀV_j). Since Ā is invertible on W̄ we get L(A) ∩ P(V_j) ⊆ ⋃_{i=1}^n P(Ā^{−1}V_i ∩ V_j). But by the minimality assumption on V_1, ..., V_n, the subspace Ā^{−1}V_i ∩ V_j must be equal to V_j for some i ∈ {1, ..., n}. Thus, ⋃_{ℓ=1}^n ⋃_{A∈A} Ā V_ℓ = ⋃_{ℓ=1}^n V_ℓ, which is again a contradiction. Proof of Proposition 2.5. Let us argue by contradiction. Namely, suppose there exists s > 0 such that for every C > 0 and K ∈ N there exist i_{C,K}, j_{C,K} ∈ Σ* such that for all k ∈ Σ_K, φ^s(A_{i_{C,K} k j_{C,K}}) < C φ^s(A_{i_{C,K}}) φ^s(A_{j_{C,K}}). We may first assume that s ∉ N; the proof of the integer case is similar and even simpler. For short, let ⌊s⌋ = k and ⌈s⌉ = k + 1. By the singular value decomposition of A_{i_{C,K}} and A_{j_{C,K}}, let v_1^{(C,K)} ∧ ... ∧ v_j^{(C,K)} denote the corresponding decomposable singular directions. So for every C > 0 and K ∈ N and for all k ∈ Σ_K, the corresponding inequality holds for the exterior powers A^{∧k}. By compactness and possibly taking a subsequence, we may assume that
9,510.2
2021-09-14T00:00:00.000
[ "Mathematics" ]
Hierarchy of many-body invariants and quantized magnetization in anomalous Floquet insulators We uncover a new family of few-body topological phases in periodically driven fermionic systems in two dimensions. These phases, which we term correlation-induced anomalous Floquet insulators (CIAFIs), are characterized by quantized contributions to the bulk magnetization from multi-particle correlations, and are classified by a family of integer-valued topological invariants. The CIAFI phases do not require many-body localization, but arise in the generic situation of k-particle localization, where the system is localized (due to disorder) for any finite number of particles up to a maximum number, k. We moreover show that, when fully many-body localized, periodically driven systems of interacting fermions in two dimensions are characterized by a quantized magnetization in the bulk, thus confirming the quantization of magnetization of the anomalous Floquet insulator. We demonstrate our results with numerical simulations. Disorder plays a crucial role in stabilizing Floquet phases in closed systems. In particular, in the presence of interactions, disorder-induced many-body localization (MBL) provides a mechanism for the system to avoid uncontrollably absorbing energy from the driving field, and thereby to retain nontrivial properties at long times [38][39][40]. Importantly, the requirement of many-body localization does not preclude the system from exhibiting a variety of types of symmetry-breaking and topological order [25,26,37]. In this paper we characterize the topological properties of time evolution in two-dimensional periodically driven systems of fermions which exhibit either full many-body localization or a weaker form of "k-particle localization" that we define below [37][38][39][40] (see Fig. 1). Recent results suggest that this class of systems can support a nontrivial topological phase, known as the anomalous Floquet insulator (AFI) [37], which can be seen as the generalization of the AFAI to interacting systems (see Refs. 30 and 31). Despite being localized and insulating, the AFI features nontrivial circulating currents in the bulk, which in the noninteracting case (the AFAI) give rise to quantized orbital magnetization [30]. In a geometry with boundaries, the AFI supports thermalizing chiral edge states coexisting with a localized bulk [31,37]. The existence of the AFI as a stable many-body state of matter rests on the existence of MBL; even if MBL does not hold out to infinite times, the phenomenology of the AFI is expected to persist for at least exponentially long times. The motivation of our work is to determine the topological invariant(s) that characterize the AFI. Focusing on the topological characterization of the micromotion of particles in the bulk (i.e., the dynamics which take place within each driving period), we uncover two main results. As our first result, we confirm that, like the AFAI, the AFI is characterized by a quantized magnetization density in regions of the bulk where all states are occupied, as schematically depicted in Fig. 1a. Specifically, the magnetization density is quantized as µ_1/T, where T denotes the driving period and µ_1 is an integer characterizing the topological phase. This quantization is protected by many-body localization, and µ_1 cannot change under any deformation of the system that preserves MBL.
As the second major finding of our work, we uncover a rich new structure of topological invariants that emerges in the interacting case: while periodically driven systems of noninteracting fermions in two dimensions (such as the AFAI) may be characterized by a single invariant µ_1, their interacting counterparts are characterized by a family of integer-valued topological invariants µ_1, µ_2, .... The invariant µ_ℓ encodes information about the contribution to the time-averaged magnetization from ℓ-particle correlations. Hence, interactions allow for a richer topological structure in the system. The topological protection of the invariant µ_ℓ relies on a less restrictive notion of localization than the conventional notion of MBL. Specifically, µ_ℓ is well-defined and topologically protected when all Floquet eigenstates with up to k particles are localized, for some k ≥ ℓ. We term this notion of localization "k-particle localization."

Figure 1. (a) The anomalous Floquet insulator (AFI) is characterized by drive-induced circulating motion of particles in the bulk. Nontrivial topology is revealed in a quantized, nonzero magnetization density within regions where all states are filled, given by m = µ_1/T, where µ_1 is a nonzero integer. (b) With sufficiently strong interactions, a new class of interaction-induced topological phases can emerge, which we term correlation-induced anomalous Floquet insulators (CIAFIs). CIAFI phases are characterized by a quantized, nonzero contribution to the magnetization from ℓ-particle correlations. Such correlations can for example arise due to the immobilization of many-particle bound states, as depicted in the figure. (c,d) Topological phase transition between the AFI and a CIAFI phase with µ_2 = 2 obtained from numerical simulations of a driven Hubbard-like model (see Sec. IV for details). (c) Contribution to the time-averaged magnetization in the system due to two-particle correlations, S_2 (see Sec. I for definition and relationship with µ_2), as a function of the interaction strength V. (d) The correlation length ξ in the system diverges for interaction strength V comparable to the hopping J, indicating a topological transition between AFI and CIAFI phases.

Many-body localization corresponds to k-particle localization in the limit where k and the system size go to infinity, while allowing the particle density to remain finite in the thermodynamic limit. While the existence of MBL in more than one dimension is still a subject of debate [41], k-particle localization for finite k is well established in any dimension [42]. It is likely that systems exhibiting k-particle localization, even if not fully MBL, may still display long-lived transient phenomena: delocalization in such systems must be induced by (k+1)-particle correlated processes, whose rates are expected to be exponentially suppressed in k for sufficiently weak interactions. Our results above show that k-particle localized Floquet systems of interacting fermions in 2D are characterized by k independent topological invariants, µ_1, ..., µ_k. When one or more of the higher-order invariants are nonzero, the system is in a new, strongly correlated, intrinsically nonequilibrium phase that is topologically distinct from any noninteracting system, including the (noninteracting) AFAI. We term this class of phases correlation-induced AFIs (CIAFIs).
Here we consider a broader notion of the term "phase" than for equilibrium systems; in the sense we consider here, a phase characterizes the structure of the Hamiltonian of the isolated system, independently of the particular state of the system (and, in particular, independently of particle density and temperature). We present a family of models which interpolate from the AFI phase to a CIAFI phase with a nonzero value of µ_2, and demonstrate the existence of a nontrivial CIAFI phase in the model through numerical simulations [see Fig. 1(c,d)]. The arguments leading to the identification of the higher-order invariants µ_ℓ can in principle also be applied to bosonic systems where the total number of bosons is conserved (e.g., as in systems of bosonic atoms in optical lattices). Hence AFI and CIAFI phases also exist for k-particle localized bosonic systems. However, for simplicity, in this paper we consider fermionic systems only. The rest of the paper is organized as follows. In Sec. I, we summarize the main results of this paper. In Sec. II we briefly review the structure of the Floquet operator in many-body and k-particle localized systems, and of the orbital magnetization operator. In Sec. III we use the time-averaged magnetization density operator to identify a set of topological invariants {µ_ℓ} that characterize the AFI phase, and show that nonzero values of the invariants give rise to a quantized magnetization density in regions where all sites are occupied (Sec. III D). In Sec. IV we present a family of models that realize both the AFI and CIAFI phases, and support our conclusions with numerical simulations of these models. We conclude with a discussion in Sec. V. I. SUMMARY OF MAIN RESULTS We begin by summarizing the main results of this paper. We consider a two-dimensional periodically driven system of interacting fermions, which is k-particle (or many-body) localized due to disorder [43]. To characterize the topology of the system, we quantify the circulating motion of particles in the bulk. This circulating motion can be captured through the time-averaged magnetization density operator of each plaquette p in the Heisenberg picture, m̂_p. The magnetization density m̂_p measures the total time-averaged current that circulates around the plaquette; see Sec. II for a definition of this operator and a review of its properties. From its intrinsic properties, we show that the trace of m̂_p defines a family of topological invariants for the system. Specifically, the trace of m̂_p in the ℓ-particle subspace, Tr_ℓ m̂_p, for each ℓ = 1, ..., k, must take the same value for each plaquette in the system; this value cannot change under any smooth deformation of the parameters of the system that preserves k-particle localization. Hence Tr_ℓ m̂_p for each ℓ = 1, ..., k constitutes a topological invariant of the system. The intrinsic invariants µ_1, ..., µ_k described in the introduction are constructed by forming system-size-independent, integer-valued combinations of the (system-size-dependent) invariants Tr_1 m̂_p, ..., Tr_k m̂_p; see Sec. III C for further details. To illustrate the physical meaning of the invariants {µ_ℓ}, consider first the case where the system holds a single fermion, initially located on site i in the lattice (we assume, without loss of generality, that each site holds a single orbital). When all single-particle Floquet eigenstates are localized, the particle will remain confined near site i at all times.
However, the driving field may cause the particle to undergo circulating motion, as schematically depicted in the bottom left of Fig. 1(b). This circulating motion gives rise to a nonzero long-time-averaged (orbital) magnetic moment, M̄_i. For both single- and many-particle systems (which we consider below), the total time-averaged magnetic moment can be computed as the integral of the magnetization density over the entire lattice, Σ_p m̂_p a². Ref. [31] showed that the sum of M̄_i over all single-particle states, S_1 ≡ Σ_i M̄_i, is quantized as an integer times A/T, where A denotes the area of the system; this integer defines µ_1. As an implication, the magnetization density is quantized in the bulk of the system in regions where all states are occupied. We now consider the dynamics resulting from initializing the system in a two-particle state where sites i and j are occupied. We let M̄_ij denote the total long-time-averaged magnetization of the system resulting from this initialization. In the absence of interactions, one can verify that M̄_ij = M̄_i + M̄_j. However, with interactions present, M̄_ij generically differs from M̄_i + M̄_j when sites i and j are close to each other. The deviation can be measured by the "magnetization cumulant" C_ij ≡ M̄_ij − (M̄_i + M̄_j). In Sec. III below, we show that, when all 1- and 2-particle states are localized, the sum of C_ij over all distinct two-particle configurations, S_2 ≡ Σ_{i<j} C_ij, must be quantized as an integer µ_2 times A/T. The number µ_2 cannot change under any perturbation that preserves localization of states with 1 and 2 particles. Thus, µ_2 is a topological invariant protected by 2-particle localization, and characterizes the contribution to the magnetization associated with 2-particle correlations. The higher-order invariants, µ_ℓ for ℓ > 2, are defined analogously to µ_2 from higher-order "cumulants" of the magnetization (see Sec. III C for details), and µ_ℓ is protected under any perturbation that preserves ℓ-particle localization. We term the class of phases characterized by nonzero values of the higher-order invariants (i.e., µ_ℓ for ℓ > 1) correlation-induced anomalous Floquet insulators (CIAFIs). The AFI phase is the MBL extension of the noninteracting AFAI, in which all higher-order invariants must be zero; it can thus only be characterized by a nonzero value of µ_1. Hence the CIAFI phases are distinct from the AFI. In Sec. IV we present a model that realizes a CIAFI phase with µ_2 = −2. The model consists of spin-1/2 fermions on a bipartite square lattice with Hubbard-like on-site interactions and disorder, subject to the 5-step driving protocol of the canonical AFAI model [16,30,31] [see Fig. 3(a)]. As discussed in Sec. IV, and shown numerically in Fig. 1(c), the strength of the Hubbard-type interactions, V, controls the topological phase of the model [see Fig. 1(b)]: when interactions are absent (V = 0), the system is in the AFAI phase with µ_1 = 2, while all higher-order invariants take the value zero [31]. When interactions are weak but finite, our numerical results indicate that many-body localization persists, and hence the system remains in the AFI phase with µ_1 = 2 (here the factor of 2 accounts for the two spin species). In particular, the values of all higher-order invariants must remain zero [S_2 = 0, see Fig. 1(c)]. However, when interactions are much stronger than the tunneling rate between the sites, J, they act to block tunneling to or from doubly occupied sites, resulting in nonzero values of C_ij for such configurations.
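The cumulant bookkeeping described above is elementary once the long-time-averaged moments are in hand. The sketch below uses made-up moment values (only the combinatorial structure is from the text): it forms C_ij = M̄_ij − (M̄_i + M̄_j), sums S_2 = Σ_{i<j} C_ij, and expresses the result in the quantization unit A/T.

```python
# Sketch: two-particle magnetization cumulants and the invariant combination S2.
# The moment values are placeholders standing in for long-time averages that
# would be extracted from simulations; T and the area are arbitrary units.
from itertools import combinations

T, area = 1.0, 16.0

M1 = {0: 0.50, 1: 0.55, 2: 0.45, 3: 0.50}          # single-particle moments M_i
M2 = {(0, 1): 0.95, (0, 2): 0.95, (0, 3): 1.00,    # two-particle moments M_ij
      (1, 2): 1.05, (1, 3): 1.05, (2, 3): 0.95}

S2 = sum(M2[(i, j)] - (M1[i] + M1[j]) for i, j in combinations(sorted(M1), 2))
print(f"S2 = {S2:+.3f}; mu2 ~ S2*T/A = {S2 * T / area:+.4f}")
```

With real data, S2*T/A would converge to the integer µ_2 as the system size grows.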
We demonstrate that this effect drives the model into a CIAFI phase with $\mu_2 = -2$ ($S_2 = -2A/T$). In Fig. 1(d), we confirm that the transition between the AFI and CIAFI phases in this model is accompanied by a divergence of the localization length of the two-particle states of the system.

II. MANY-BODY AND k-PARTICLE LOCALIZATION IN PERIODICALLY DRIVEN SYSTEMS

The main result of this work is to characterize the topological properties of time evolution in two-dimensional periodically driven k-particle (or many-body) localized fermionic systems. As a preliminary step, in this section we review the structure of the Floquet operator in such systems. The system we study is a two-dimensional lattice system of interacting fermions, of physical dimensions L × L, subject to periodic driving. While our results apply to any type of lattice, below we assume for simplicity that the system is defined on a square lattice with lattice constant a and (time-dependent) nearest-neighbor tunneling. The time evolution of the system is described by the time-periodic Hamiltonian H(t) = H(t + T), where T is the driving period. To avoid complications from the coexistence of thermalizing chiral edge states and a localized bulk [37], we focus on the case where the system is defined on a torus, such that no edges are present [44].

A. Structure of the Floquet operator in many-body localized systems

We first review the structure of the Floquet operator when the system is many-body localized, i.e., when any state of the system exhibits localized behavior in the thermodynamic limit. The concepts we introduce here also form a basis for our discussion of the more general case of k-particle localization (Sec. II B). When the system is MBL, it has a complete set of emergent local integrals of motion (LIOMs) [39,40,45,46], $\{\hat n_\alpha\}$. The LIOMs form a mutually commuting set of quasilocal operators that are individually preserved by the stroboscopic evolution of the system [47]. The number of independent LIOMs in the localized system is given by the dimension $D_1$ of the system's single-particle Hilbert space. For spinless fermions with one orbital per site, we have $D_1 = L^2/a^2$. The LIOMs $\{\hat n_\alpha\}$ may thus be labelled by a single index α which runs from 1 to $D_1$.

To make the discussion more concrete, the LIOMs can be identified from the system's Floquet operator [39], U(T). The Floquet operator is defined as the evolution operator of the system, $U(t) \equiv \mathcal{T} e^{-i\int_0^t dt'\, H(t')}$, evaluated for a time interval corresponding to one complete driving period T. Here $\mathcal{T}$ denotes the time-ordering operation, and we work in units where $\hbar = 1$ throughout. Analogously to nondriven systems, the stroboscopic time evolution (i.e., the time evolution at integer multiples of the driving period T) is conveniently expressed in terms of the eigenstates of the Floquet operator, $\{|\psi_n\rangle\}$, known as Floquet eigenstates. These satisfy $U(T)|\psi_n\rangle = e^{-i\varepsilon_n T}|\psi_n\rangle$, where $\varepsilon_n$ has units of energy and is known as the quasienergy. Note that each quasienergy $\varepsilon_n$ is only defined modulo the driving frequency $\Omega \equiv 2\pi/T$. The stroboscopic time evolution is hence equivalent to that generated by the static effective Hamiltonian $H_{\mathrm{eff}} \equiv \sum_n \varepsilon_n |\psi_n\rangle\langle\psi_n|$, since $U(nT) = e^{-i H_{\mathrm{eff}} n T}$.

In the many-body localized regime, the effective Hamiltonian takes the form

$H_{\mathrm{eff}} = \sum_{\ell=1}^{D_1} \sum_{\alpha_1 < \cdots < \alpha_\ell} \varepsilon_{\alpha_1 \ldots \alpha_\ell}\, \hat n_{\alpha_1} \cdots \hat n_{\alpha_\ell}.$  (1)

Each coefficient $\varepsilon_{\alpha_1 \ldots \alpha_\ell}$ (referred to as a quasienergy coefficient in the following) is associated with a particular combination $\hat n_{\alpha_1} \cdots \hat n_{\alpha_\ell}$ formed from $\ell$ of the $D_1$ distinct LIOMs, and has units of energy.
Each sum $\sum_{\alpha_1 < \cdots < \alpha_\ell}$ in Eq. (1) runs over all $\binom{D_1}{\ell}$ combinations of $\ell$ distinct LIOMs, where $\binom{a}{b}$ denotes the binomial coefficient. The above form of the Floquet operator implies that each LIOM $\hat n_\alpha$ is preserved by the stroboscopic evolution of the system, and thus the operators $\{\hat n_\alpha\}$ are integrals of motion.

We now review some important properties of the LIOMs which we use in the following. Firstly, each LIOM $\hat n_\alpha$ can be written in the form of a fermionic counting operator: $\hat n_\alpha = \hat f^\dagger_\alpha \hat f_\alpha$. Here $\hat f_\alpha$ is a (dressed) quasilocal fermionic annihilation operator, constructed from the original lattice annihilation and creation operators $\{\hat c_i\}$ and $\{\hat c^\dagger_i\}$, respectively, as

$\hat f_\alpha = \sum_i \psi^\alpha_i\, \hat c_i + \sum_{j,k,l,m} \psi^\alpha_{jklm}\, \hat c^\dagger_j \hat c_k \hat c_l \hat c_m + \cdots,$

where $\hat c_i$ annihilates a fermion on site i in the lattice. Through the identification of the LIOMs with fermionic counting operators, we note that $\sum_\alpha \hat n_\alpha$ gives the total number of fermions in the system. Another crucial property of the LIOMs is that each LIOM $\hat n_\alpha$ has its support localized around a particular location $r_\alpha$ in the lattice. Specifically, the magnitude of the coefficient $\psi^\alpha_{i_1 \ldots i_\ell}$ decreases exponentially with the distance s from any of the sites $i_1, \ldots, i_\ell$ to $r_\alpha$: $\psi^\alpha_{i_1 \ldots i_\ell} \sim e^{-s/\xi_f}$, where the length scale $\xi_f$ sets the spatial extent of the LIOMs. Similarly to the LIOMs, the quasienergy coefficients $\{\varepsilon_{\alpha_1 \ldots \alpha_\ell}\}$ also exhibit localized behavior. Specifically, $\varepsilon_{\alpha_1 \ldots \alpha_\ell}$ decays as $e^{-d/\xi_\varepsilon}$, where d is the distance between any two of the LIOM centers $r_{\alpha_1}, \ldots, r_{\alpha_\ell}$; here $\xi_\varepsilon$ is another localization length scale (not necessarily identical to $\xi_f$, see Ref. 48). As is evident above, MBL systems may be characterized by several distinct localization lengths [48]. In particular, the LIOM expansion above establishes two length scales, $\xi_f$ and $\xi_\varepsilon$. In the following, we will make use of an additional relevant length scale, $\xi_l$, which characterizes the spread of time-evolved operators.

B. k-particle localization

As we explained in the introduction, the topological classification we develop in this work applies to a more general class of systems than those exhibiting full MBL; specifically, the invariants we identify can be defined for any system that is k-particle localized for some nonzero k. As defined in the introduction, k-particle localization is understood as the situation where all Floquet eigenstates holding $\ell$ particles, for $\ell = 1, \ldots, k$, are localized. In the remainder of this paper we will use similar notation, such that $\ell$ always refers to a specific particle-number sector, while k refers to the "degree of localization" of the system: i.e., k is defined as the integer such that Floquet eigenstates in the system with k or fewer particles are localized, while at least one Floquet eigenstate with k + 1 particles is delocalized. For k-particle localized systems, we expect that a LIOM decomposition and effective Hamiltonian $H_{\mathrm{eff}}$ as defined in Eq. (1) can be written to describe the evolution in the Fock space of up to k particles, with the expansion truncated to kth order. Full MBL can be seen as a special case of k-particle localization; specifically, MBL can be understood as the $k \to \infty$ limit of k-particle localization, where the localization length of the truncated LIOM expansion described above remains bounded for all k.

III. TOPOLOGICAL INVARIANTS OF THE TIME EVOLUTION

In this section, as the main result of our work, we characterize the micromotion of k-particle localized systems (which includes the case of MBL as described above).
We show that such systems may exhibit nontrivial micromotion, featuring steady-state circulating currents at long times. We characterize these circulating currents by analyzing the time-averaged magnetization density operator of the system. From this analysis we identify a set of topological invariants $\mu_1 \ldots \mu_k$ that characterize the steady-state circulating currents that the system may support.

FIG. 2. a) [...] In many-body localized systems, the time-averaged current passing through a cut C is determined by the difference between the currents circulating around the cut's two endpoints, p and q. The currents circulating around plaquette p are measured by the magnetization density operator $\hat m_p$. b) Ampère's law on the lattice. The difference in magnetization densities between two adjacent plaquettes p and q gives the current $\bar I_{pq}$ on the bond between them.

In a stepwise fashion, below we consider the dynamics of a k-particle localized system in the $\ell$-particle subspace for each $\ell = 1, \ldots, k$ (allowing k to be infinite for fully MBL systems). This approach ensures that our results do not rely on full MBL to be valid, while still applying to fully MBL systems, if such exist.

A. Characterization of micromotion

To characterize the micromotion of k-particle localized systems, in this subsection we consider the dynamics within the subspace of states holding $\ell$ particles, where $\ell \le k$. Naively, one might expect that the time-averaged current density in this subspace always vanishes due to localization. Indeed, there can be no net flow of charge across any closed curve. However, for an open curve (or "cut"), as schematically depicted in Fig. 2a, a nonzero time-averaged current may run across the cut due to uncompensated local circulating currents around the curve's endpoints. The total current circulating around a point in a given plaquette is precisely the magnetization density in this plaquette.

To establish this relationship in more rigorous terms, we consider the total time-averaged current that passes through a cut C between plaquettes p and q in the lattice, as depicted in Fig. 2a. The operator $I_C(t)$ measuring the current through the cut C is given by

$I_C(t) = \sum_{b \in B_C} I_b(t),$  (2)

where $I_b$ denotes the bond current operator on bond b (restricted to the $\ell$-particle subspace) [49], and the sum runs over the set $B_C$ of all bonds that cross the cut C [see Appendix A for an explicit definition of $I_b(t)$]. Note that $I_b(t)$, and thereby $I_C(t)$, depends on time in the Schrödinger picture due to the explicit time dependence of the Hamiltonian H(t).

To characterize the circulating currents in the system, we seek the long-time-averaged expectation value of the current $I_C$ for an arbitrary initial $\ell$-particle state, $|\psi\rangle$. Here we introduce the notation $\langle O \rangle \equiv \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt\, \langle\psi(t)|O(t)|\psi(t)\rangle$ to indicate the time-averaged expectation value in the state $|\psi(t)\rangle$. The time-averaged current $\langle I_C \rangle$ may equivalently be computed in the Heisenberg picture as $\langle I_C \rangle = \langle\psi|\bar I_C|\psi\rangle$, where $|\psi\rangle$ denotes the initial many-body state of the system, and $\bar I_C$ denotes the long-time average of the current operator $I_C$ in the Heisenberg picture:

$\bar I_C \equiv \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt\, U^\dagger(t)\, I_C(t)\, U(t),$  (3)

where U(t) denotes the system's time-evolution operator as defined above. For later use, we define $\bar O \equiv \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt\, U^\dagger(t) O(t) U(t)$ for any operator O. As argued above, the time-averaged current $\bar I_C$ across cut C can only have a nonzero expectation value due to localized circulating currents at the cut's two endpoints, p and q. This implies that $\bar I_C$ only depends on the details of the system near plaquettes p and q.
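As an aside, the long-time average $\bar O$ can be made concrete numerically. The sketch below computes it for a toy piecewise-constant drive by projecting the one-period Heisenberg average onto the diagonal of the Floquet eigenbasis (valid for a nondegenerate quasienergy spectrum); the Hilbert-space dimension, the two segment Hamiltonians, and the observable are illustrative placeholders, not the model of this paper:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D, T, n_steps = 40, 1.0, 200            # toy dimension, period, time slices

def rand_herm(d):
    h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (h + h.conj().T) / 2

H1, H2 = rand_herm(D), rand_herm(D)     # two-segment toy drive H(t)
dt = T / n_steps

# propagators U(t) on a time grid over one driving period
U_grid, U = [np.eye(D, dtype=complex)], np.eye(D, dtype=complex)
for step in range(n_steps):
    H_t = H1 if step < n_steps // 2 else H2
    U = expm(-1j * H_t * dt) @ U
    U_grid.append(U)
U_T = U_grid[-1]                        # Floquet operator U(T)

# quasienergies, defined modulo Omega = 2*pi/T
evals, V = np.linalg.eig(U_T)           # U(T)|psi_n> = exp(-i eps_n T)|psi_n>
eps_n = np.mod(-np.angle(evals) / T, 2 * np.pi / T)

# one-period Heisenberg average of a (toy, time-independent) observable O
O = rand_herm(D)
O_one = sum(u.conj().T @ O @ u for u in U_grid[:-1]) * dt / T

# long-time average O-bar: keep only the diagonal of O_one in the Floquet
# eigenbasis (off-diagonal elements dephase for a nondegenerate spectrum)
V_inv = np.linalg.inv(V)
O_bar = V @ np.diag(np.diag(V_inv @ O_one @ V)) @ V_inv

# O-bar is an integral of motion: it commutes with the Floquet operator
assert np.allclose(O_bar @ U_T, U_T @ O_bar, atol=1e-8)
```

The final assertion checks that $\bar O$ commutes with U(T), i.e., that the long-time average is an integral of motion, a property used repeatedly in what follows.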
In Appendix A we verify this intuition by proving that the operator $\bar I_C$ only has support near the two endpoints of the cut C. Specifically, assuming only k-particle localization and conservation of charge, we show that, within the $\ell$-particle subspace, where $\ell \le k$, $\bar I_C$ must take the form

$\bar I_C = \hat m_p - \hat m_q,$  (4)

where the operator $\hat m_p$ has its full support (up to an exponentially small correction) within a distance $\xi_l$ from plaquette p, and similarly for $\hat m_q$. Here $\xi_l$ is a finite, system-size independent length scale measuring the spread of operators in the system (within the $\ell$-particle subspace): specifically, for any time-periodic operator A(t) with a finite region of support R, the long-time average $\bar A$ (when restricted to the $\ell$-particle subspace) is a local integral of motion with support within a finite distance $\xi_l$ from R (up to an exponentially small correction) [50].

Crucially, the operator $\hat m_p$ in Eq. (4) is the same for any cut with an endpoint in plaquette p. Thus, Eq. (4) uniquely defines the operator $\hat m_p$ for each plaquette p in the system, up to a correction exponentially small in system size. Specifically, let plaquette q be separated from plaquette p by a distance d, of order the system size, L. In this case, $\hat m_p$ can be identified uniquely from the terms of $\bar I_C$ which have support nearest to plaquette p, up to a correction of order $O(e^{-d/\xi_l}) \sim O(e^{-L/\xi_l})$. For each plaquette p, $\hat m_p$ may be defined from Eq. (4) as described above by considering a cut of length ∼ L (up to an exponentially small correction). The set of operators $\{\hat m_p\}$ obtained in this way then obeys Eq. (4) for any two plaquettes in the lattice. In particular, when the plaquettes p and q are adjacent, Eq. (4) implies that $\hat m_p - \hat m_q = \bar I_{pq}$, where $\bar I_{pq}$ measures the time-averaged current on the bond separating plaquettes p and q, as schematically depicted in Fig. 2b. This relationship is the time-averaged lattice version of Ampère's law, which relates the current density, j, to the magnetization density, m: $j = \nabla \times m$ (see Ref. 30). We thus identify the operator $\hat m_p$ as the time-averaged magnetization density in the system at plaquette p [51]. As the above discussion shows, the time-averaged magnetization $\hat m_p$ measures the total current circulating around plaquette p.

B. Topological invariance of $\mathrm{Tr}_\ell\, \hat m_p$

We now show that, for each value of $\ell = 1, \ldots, k$, the trace of $\hat m_p$ in the $\ell$-particle subspace, $\mathrm{Tr}_\ell\, \hat m_p$, takes the same value for all plaquettes in the system. Subsequently (in Sec. III B 1) we show that this universal value is quantized as an integer multiple of 1/T, $z_\ell$. Periodically driven k-particle localized systems of fermions in two dimensions are thus characterized by the k integer-valued topological invariants $z_1 \ldots z_k$.

We prove the topological invariance of $\mathrm{Tr}_\ell\, \hat m_p$ through a simple line of arguments. First, Eq. (4) implies

$\mathrm{Tr}_\ell\, \bar I_C = \mathrm{Tr}_\ell\, \hat m_p - \mathrm{Tr}_\ell\, \hat m_q.$  (5)

Using the cyclic property of the trace, $\mathrm{Tr}_\ell\, \bar I_C$ equals the time average of $\mathrm{Tr}_\ell\, I_C(t)$. Recall from Eq. (2) that the current operator $I_C(t)$ is given by a sum of bond current operators. Noting that any bond current operator $I_b(t)$ is by construction traceless (see Appendix A), we conclude that $\mathrm{Tr}_\ell\, \bar I_C = 0$. Hence we find

$\mathrm{Tr}_\ell\, \hat m_p = \mathrm{Tr}_\ell\, \hat m_q.$  (6)

This relation holds for any pair of plaquettes in the lattice. Therefore, for a given disorder realization, $\mathrm{Tr}_\ell\, \hat m_p$ must take the same universal value for all plaquettes in the system.

We now show that the universal value of $\mathrm{Tr}_\ell\, \hat m_p$ is a topological invariant of the system in the thermodynamic limit ($L \to \infty$) [52]. Consider perturbing H(t) within some subregion R of the system (by a small but finite amount), in such a way that $\ell$-particle localization is preserved.
Before and after the perturbation, $\mathrm{Tr}_\ell\, \hat m_p$ only depends on the details of the system around the plaquette p, up to an exponentially small correction (due to the exponentially decaying tails of the LIOMs). Hence, for a plaquette p located a distance of order L/2 from the region R, $\mathrm{Tr}_\ell\, \hat m_p$ may only change by an amount of order $e^{-L/2\xi_l}$ due to the perturbation. Since $\mathrm{Tr}_\ell\, \hat m_p$ is given by the same value for all plaquettes in the system, $\mathrm{Tr}_\ell\, \hat m_p$ must remain unaffected by the perturbation even for plaquettes within the region where the system is perturbed, R. Thus, $\mathrm{Tr}_\ell\, \hat m_p$ is unaffected by any local perturbation that preserves $\ell$-particle localization, up to a correction exponentially suppressed in system size. We conclude that $\mathrm{Tr}_\ell\, \hat m_p$ is a topological invariant of the system, protected by $\ell$-particle localization. In the following, it is convenient to parameterize the topologically invariant value of $\mathrm{Tr}_\ell\, \hat m_p$ by a dimensionless number; we hence let $z_\ell$ denote the value of $\mathrm{Tr}_\ell\, \hat m_p$ in units of the inverse driving period, such that $\mathrm{Tr}_\ell\, \hat m_p = z_\ell / T$.

1. Quantization of $z_\ell$

Here we show that the dimensionless invariant $z_\ell$ must take an integer value for each $\ell$. To do this, we use an approach that generalizes the one employed for the noninteracting case in Ref. 30. This subsection provides a summary of the proof, while full details are given in Appendix B. To begin, we consider the total time-averaged magnetization operator, $\hat{\bar M} \equiv \sum_p \hat m_p a^2$. Since $\mathrm{Tr}_\ell\, \hat m_p$ takes the value $z_\ell/T$ for all plaquettes in the system, we have

$\mathrm{Tr}_\ell\, \hat{\bar M} = z_\ell\, L^2 / T.$  (7)

To establish the quantization of $z_\ell$, we proceed in two steps. First, we obtain $\mathrm{Tr}_\ell\, \hat{\bar M}$ from the response of the system to the insertion of the weak uniform magnetic field $B_0 = 2\pi/L^2$ that corresponds to one flux quantum piercing the torus (note that the flux quantum is given by 2π in the units we employ): we show that, in the thermodynamic limit,

$|\tilde U(T)|_\ell = |U(T)|_\ell\; e^{i B_0\, \mathrm{Tr}_\ell(\hat{\bar M})\, T},$  (8)

where $\tilde U(T)$ denotes the Floquet operator of the system in the presence of the magnetic field $B_0$, and $|\cdot|_\ell$ denotes the determinant within the $\ell$-particle subspace. Subsequently, we show that the determinants $|\tilde U|_\ell$ and $|U|_\ell$ must be identical (see also Ref. 30); this implies that $\mathrm{Tr}_\ell(\hat{\bar M})\, B_0 T$ equals an integer multiple of 2π. Using $B_0 = 2\pi/L^2$ along with Eq. (7), we conclude that $z_\ell$ must be an integer.

To obtain Eq. (8) (which forms the first step in our derivation), we show that the magnetic moment of each $\ell$-particle Floquet eigenstate, $|\psi_n\rangle$, gives the response of its quasienergy, $\varepsilon_n$, to the addition of the weak magnetic field $B_0$. Letting $\tilde\varepsilon_n$ denote the perturbed quasienergy level in the one-flux system associated with $|\psi_n\rangle$ (see the following for details, and, in particular, for a discussion of the perturbation-induced resonances), we show in Appendix B that

$\tilde\varepsilon_n \approx \varepsilon_n - B_0\, \langle\psi_n|\hat{\bar M}|\psi_n\rangle.$  (9)

Specifically, the sum of $\tilde\varepsilon_n - \varepsilon_n$ over all $\ell$-particle Floquet states satisfies

$\sum_n (\tilde\varepsilon_n - \varepsilon_n) = -B_0\, \mathrm{Tr}_\ell(\hat{\bar M}) + O(e^{-L/\xi}),$  (10)

where $O(e^{-L/\xi})$ denotes some (dimensionful) correction which goes to zero as $e^{-L/\xi}$ in the thermodynamic limit. We obtain Eq. (8) from Eq. (10) by multiplying by $-iT$, taking the exponential on both sides, and recalling that $|\tilde U(T)|_\ell = \exp(-i \sum_n \tilde\varepsilon_n T)$, and likewise for U(T). Eq. (10) can be obtained through first-order perturbation theory in $B_0$. In Appendix B, we provide a rigorous derivation of this result, along with an exact definition of the one-to-one relationship between the quasienergy levels of the one- and zero-flux systems which Eq. (10) implicitly requires. (In particular, we give the prescription for uniquely identifying $\tilde\varepsilon_n$ for each "unperturbed" quasienergy level $\varepsilon_n$.)
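Written out, the chain connecting Eq. (10) to Eq. (8), and then to the integrality of $z_\ell$, is the following one-line computation:

$$|\tilde U(T)|_\ell = e^{-i\sum_n \tilde\varepsilon_n T} = e^{-i\sum_n \varepsilon_n T}\; e^{\,i B_0\, \mathrm{Tr}_\ell(\hat{\bar M})\, T + O(e^{-L/\xi})} = |U(T)|_\ell\; e^{\,i B_0\, \mathrm{Tr}_\ell(\hat{\bar M})\, T + O(e^{-L/\xi})},$$

so that $|\tilde U(T)|_\ell = |U(T)|_\ell$ forces $B_0\, \mathrm{Tr}_\ell(\hat{\bar M})\, T \in 2\pi\mathbb{Z}$; inserting $\mathrm{Tr}_\ell\, \hat{\bar M} = z_\ell L^2/T$ from Eq. (7) and $B_0 = 2\pi/L^2$ gives $2\pi z_\ell \in 2\pi\mathbb{Z}$, i.e., $z_\ell \in \mathbb{Z}$.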
Here we summarize the arguments: near the region of support of $|\psi_n\rangle$ [53], the Hamiltonian of the one-flux system, $\tilde H(t)$, is given by

$\tilde H(t) = H(t) + \sum_b \theta_b\, I_b(t) + \ldots,$  (11)

where $\theta_b$ denotes the Peierls phase on bond b induced by the magnetic field $B_0$, and $I_b(t)$ denotes the bond current operator (see Sec. III A and Appendix A). Note that there is a gauge freedom in choosing the Peierls phases; we choose them to be of order $1/L^2$ near the region of support of $|\psi_n\rangle$ (such that the subleading correction in the above expansion of $\tilde H(t)$ can be neglected in the thermodynamic limit).

In the thermodynamic limit $L \to \infty$, one may naively expect that the quasienergy spectrum of the one-flux system can be obtained through a first-order perturbation expansion in $\delta H(t) \equiv \tilde H(t) - H(t)$. However, note that the convergence of such an expansion to first order is only ensured if the ratio between the matrix elements of δH in the Floquet eigenstate basis and the corresponding quasienergy level spacings, $r_{mn} \equiv \langle\psi_m|\delta H(t)|\psi_n\rangle/(\varepsilon_m - \varepsilon_n)$, is much smaller than 1 for all choices of $\ell$-particle Floquet eigenstates m and n. While the perturbation δH(t) is of order $L^{-2}$, the many-body level spacing in the $\ell$-particle subspace is of order $\Omega/\binom{L^2}{\ell}$, where $\Omega \equiv 2\pi/T$ denotes the angular driving frequency. Hence, in the thermodynamic limit $r_{mn}$ can potentially be much larger than 1 for certain choices of m and n. However, in Appendix B we provide a careful analysis that confirms our initial expectation: with a probability that goes to 1 in the thermodynamic limit (for each $\ell$ between 1 and k), $r_{mn}$ goes to zero for all choices of m and n. This result arises because states where $\langle\psi_n|\delta H|\psi_m\rangle$ is nonvanishing must be spatially close, and hence experience local level repulsion.

The above discussion shows that the quasienergy level corresponding to the state $|\psi_n\rangle$ in the one-flux system, $\tilde\varepsilon_n$, is captured by first-order perturbation theory with respect to δH(t). Expanding the quasienergy $\tilde\varepsilon_n$ to first order in δH(t), we obtain $\tilde\varepsilon_n \approx \varepsilon_n + \frac{1}{T}\int_0^T dt\, \langle\psi_n|\delta H(t)|\psi_n\rangle$; along with the fact that in a Floquet eigenstate the time-averaged expectation value over one period is identical to the long-time average, we find

$\tilde\varepsilon_n - \varepsilon_n \approx \sum_b \theta_b\, \langle\psi_n|\bar I_b|\psi_n\rangle,$  (12)

where $\bar I_b$ denotes the long-time average of the bond current $I_b(t)$ in the Heisenberg picture (see Sec. III A). Recall from Eq. (4) (see also Fig. 2b) that $\bar I_b = \hat m_{p_b} - \hat m_{q_b}$, where $p_b$ and $q_b$ denote the two adjacent plaquettes separated by the bond b, such that b is oriented counterclockwise with respect to $p_b$ [49]. Inserting this result into Eq. (12), we note that each plaquette in the lattice appears exactly four times (namely, once for each of the four bonds bounding the plaquette). Rearranging the terms from a sum over bonds to a sum over plaquettes, we thus find

$\tilde\varepsilon_n - \varepsilon_n \approx -\sum_p \langle\psi_n|\hat m_p|\psi_n\rangle\, \bigl(\theta_{b_{p,1}} + \theta_{b_{p,2}} + \theta_{b_{p,3}} + \theta_{b_{p,4}}\bigr),$  (13)

where $b_{p,i}$ denotes the lattice bond that constitutes the ith edge of plaquette p (counted in clockwise order starting from the positive x-direction), and $\theta_{b_{p,i}}$ gives the Peierls phase acquired by traversing the bond counterclockwise with respect to p. The sum of Peierls phases $\theta_{b_{p,1}} + \theta_{b_{p,2}} + \theta_{b_{p,3}} + \theta_{b_{p,4}}$ hence gives the flux through plaquette p, and thus yields exactly $B_0 a^2$ for each plaquette. Eq. (9) follows by using $\hat{\bar M} \equiv \sum_p a^2\, \hat m_p$. The rigorous derivation in Appendix B shows that the correction to the approximate equality in Eq. (9) scales with system size as $L^{-4}$, and hence is subleading in the thermodynamic limit (recall that $B_0 \sim L^{-2}$). We subsequently use the LIOM structure of the Floquet operator in Eq.
(1) to show that, remarkably, these individual corrections approximately cancel out when summed over all $\ell$-particle states, yielding an exponentially suppressed net correction, which scales with system size as $e^{-L/\xi}$. This establishes Eq. (10), and thereby also Eq. (8).

What remains to be shown is that U(T) and $\tilde U(T)$ have identical determinants in the $\ell$-particle subspace. We show this using the approach from Ref. 30: the determinant of any time-evolution operator can be found from the time-integrated trace of the Hamiltonian [17]:

$|U(t)|_\ell = \exp\!\left(-i \int_0^t dt'\, \mathrm{Tr}_\ell\, H(t')\right),$  (14)

which can be straightforwardly verified using the spectral decomposition of U(t). Identifying the integrand on the right-hand side above as the trace of the Hamiltonian, and noting that the Peierls phases only modify the off-diagonal (hopping) terms of the Hamiltonian, we find $\mathrm{Tr}_\ell\, \tilde H(t) = \mathrm{Tr}_\ell\, H(t)$, and hence $|\tilde U(T)|_\ell = |U(T)|_\ell$. Combining this with Eqs. (7) and (8), and using that $B_0 = 2\pi/L^2$, we conclude that $z_\ell$ must be an integer.

C. Cumulant basis of invariants

The above discussion shows that k-particle localized systems are characterized by the k independent, integer-valued topological invariants $z_1 \ldots z_k$. Here $z_\ell$ gives the trace of the magnetization density operator in the $\ell$-particle subspace (in units of the inverse driving period). However, each $z_\ell$ depends on the size of the system, and thus is not an intrinsic property of the system. For instance, in noninteracting systems, $z_\ell$ scales as $L^{2(\ell-1)}$, where L is the physical dimension of the system [54]. In this subsection we construct linear combinations of the invariants $z_1 \ldots z_k$ that give an equivalent set of system-size independent invariants $\mu_1 \ldots \mu_k$ that characterize the intrinsic topological properties of the system.

The intrinsic invariants $\mu_1 \ldots \mu_k$ can be expressed as the cumulants of the magnetization operator, as discussed in Sec. I. To illustrate, consider the time-averaged magnetic moment, $\hat{\bar M} \equiv \sum_p a^2\, \hat m_p$, of a state where two particles are initialized on sites i and j, which we denote $\bar M_{ij}$. The average of the total magnetic moment, taken over all 2-particle states, is given by $\frac{1}{D_2}(z_2 L^2/T)$, where $D_\ell$ denotes the dimension of the $\ell$-particle subspace. For each i and j, we write $\bar M_{ij} = \bar M_i + \bar M_j + C_{ij}$, where, as in Sec. I, $\bar M_i$ denotes the time-averaged magnetization of the system holding a single particle initially located at site i. From this definition of $C_{ij}$, we find

$\sum_{i<j} C_{ij} = \bigl[z_2 - (D_1 - 1)\, z_1\bigr]\, \frac{L^2}{T},$  (15)

where we used that $\mathrm{Tr}_\ell\, \hat{\bar M} = z_\ell L^2/T$ for $\ell = 1, 2$. The right-hand side is evidently an integer multiple of $L^2/T$; we take this integer to be our definition of the intrinsic invariant $\mu_2$. Note that $\mu_2$ gives the mean value of $S_i \equiv \sum_{j \ne i} C_{ij}$ over all sites i (recall that $C_{ij} = C_{ji}$). Importantly, due to the fact that the two particles only influence each other's motion when they are within a localization length of one another, the cumulant $C_{ij}$ is only significant for $O(\xi_l^2/a^2)$ choices of j for each i. The mean value of $S_i$ is therefore an intrinsic quantity, which does not depend on the system size; in particular, it remains finite in the thermodynamic limit. In the noninteracting case, $C_{ij} = 0$, and $\mu_2 = 0$. Thus, $\mu_2$ gives the contribution to the magnetization from 2-particle correlations.

We extend this definition to higher numbers of particles by expanding $\hat{\bar M}$ in terms of the fermionic annihilation and creation operators. Since $\hat{\bar M}$ preserves the number of particles, we have

$\hat{\bar M} = \sum_{k} \sum_{i_1 \ldots i_k;\, j_1 \ldots j_k} M_{i_1 \ldots i_k;\, j_1 \ldots j_k}\; \hat c^\dagger_{i_1} \cdots \hat c^\dagger_{i_k}\, \hat c_{j_1} \cdots \hat c_{j_k}.$  (16)

Without loss of generality, we take $M_{i_1 \ldots i_k;\, j_1 \ldots j_k}$ to be nonzero only if $i_1 < i_2 < \ldots < i_k$ and $j_1 > j_2 > \ldots > j_k$, such that each independent combination of creation and annihilation operators appears only once in the above sum.
We see that the expectation value of $\hat{\bar M}$ in a single-particle state $|i\rangle \equiv \hat c^\dagger_i |0\rangle$ (where $|0\rangle$ denotes the vacuum state) is given by $M_{i;i}$. We thus identify $M_{i;i} = \bar M_i$, where $\bar M_i$ was defined above. Likewise, in the two-particle state $|ij\rangle \equiv \hat c^\dagger_i \hat c^\dagger_j |0\rangle$ (where i < j), the expectation value of $\hat{\bar M}$ is given by $M_{i;i} + M_{j;j} + M_{ij;ji}$. We thus identify $M_{ij;ji} = C_{ij}$. The higher-order cumulants can be defined in a similar fashion, such that $C_{i_1 \ldots i_\ell} = M_{i_1 \ldots i_\ell;\, i_\ell \ldots i_1}$. Note that the long-time average of an operator in the Heisenberg picture, such as $\hat{\bar M}$, must be diagonal in the Floquet eigenstate basis; for example, $M_{i;j}$ is diagonal in the basis of single-particle Floquet eigenstates. Due to localization and the locality of interactions (see above), the coefficient $C_{i_1 \ldots i_\ell}$ can only be nonzero if all sites $i_1 \ldots i_\ell$ are spatially close (on the scale of $\xi_l$). Thus, through arguments analogous to those below Eq. (15), the sum of $C_{i_1 \ldots i_\ell}$ over all $\ell$-particle configurations, divided by the number of sites, is an intrinsic quantity of the system. This motivates us to define the $\ell$th intrinsic invariant as

$\mu_\ell \equiv \frac{T}{L^2} \sum_{i_1 < \cdots < i_\ell} C_{i_1 \ldots i_\ell}.$  (17)

To relate $\mu_\ell$ to the invariants $z_1 \ldots z_k$, we take the $\ell$-particle trace in Eq. (16). Using that the $\ell$-particle trace of a product of ν creation and ν annihilation operators is nonzero only when the two index sets coincide, in which case it equals $\binom{D_1 - \nu}{\ell - \nu}$ (this can be verified from combinatorial arguments), where $D_1 = L^2/a^2$ denotes the dimension of the system's single-particle subspace, we find

$z_\ell = \sum_{\nu=1}^{\ell} \binom{D_1 - \nu}{\ell - \nu}\, \mu_\nu,$  (18)

where we used $\mathrm{Tr}_\ell\, \hat{\bar M} = z_\ell L^2/T$. By induction, one can verify that each $\mu_\ell$ is an integer. First, by the definition above, $\mu_1$ equals $z_1$, and hence is an integer. For $\ell > 1$, if $\mu_1, \ldots, \mu_{\ell-1}$ are integers, Eq. (18) expresses $\mu_\ell$ as $z_\ell$ minus an integer combination of $\mu_1, \ldots, \mu_{\ell-1}$; hence $\mu_\ell$ is also an integer.

To further elucidate the physical meaning of the intrinsic invariant $\mu_\ell$, we express it in terms of the LIOMs that were introduced in Sec. II. Since the long-time average of any Heisenberg-picture operator is diagonal in the basis of Floquet eigenstates [55], the operator $\hat m_p$ must be an integral of motion [56]. This requires $\hat m_p$ to take the following form in terms of the LIOMs $\{\hat n_\alpha\}$ that we introduced in Eq. (1):

$\hat m_p = \sum_{\nu} \sum_{\alpha_1 < \cdots < \alpha_\nu} m^p_{\alpha_1 \ldots \alpha_\nu}\, \hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu}.$  (19)

Here, for each term involving a product of ν LIOMs, the sum $\sum_{\alpha_1 < \cdots < \alpha_\nu}$ runs over the $\binom{D_1}{\nu}$ distinct combinations of LIOM indices $\alpha_1 \ldots \alpha_\nu$. Due to the finite support of the operator $\hat m_p$, we note that the coefficient $m^p_{\alpha_1 \ldots \alpha_\nu}$ vanishes as $e^{-d/\xi_l}$, where d is the distance from the plaquette p to the center of the most remote of the LIOMs $\alpha_1 \ldots \alpha_\nu$. Taking the $\ell$-particle trace in Eq. (19) and using $\mathrm{Tr}_\ell\, [\hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu}] = \binom{D_1 - \nu}{\ell - \nu}$, we find

$\frac{z_\ell}{T} = \sum_{\nu=1}^{\ell} \binom{D_1 - \nu}{\ell - \nu} \sum_{\alpha_1 < \cdots < \alpha_\nu} m^p_{\alpha_1 \ldots \alpha_\nu}.$  (20)

Comparing with Eq. (18) for each $\ell = 1, \ldots, k$, we find

$\frac{\mu_\ell}{T} = \sum_{\alpha_1 < \cdots < \alpha_\ell} m^p_{\alpha_1 \ldots \alpha_\ell}.$  (21)

Note that $\mu_\ell$ is independent of the choice of plaquette p. From the expression above, it is evident that $\mu_\ell$ characterizes the intrinsic topological properties of the system. Since the magnetization coefficients $\{m^p_{\alpha_1 \ldots \alpha_\ell}\}$ vanish when the distance from any of the LIOM centers $r_{\alpha_1} \ldots r_{\alpha_\ell}$ to plaquette p becomes large, the right-hand side of Eq. (21) is independent of system size in the thermodynamic limit. In essence, $\mu_\ell$ captures the contribution of $\ell$-body correlations to the magnetization density.
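The combinatorial content of Eq. (18) and of the trace identity $\mathrm{Tr}_\ell[\hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu}] = \binom{D_1 - \nu}{\ell - \nu}$ can be checked by brute force on a small Hilbert space; the sketch below also inverts Eq. (18) to recover $\mu_\ell$ from $z_\ell$. The invariant values fed in at the end are hypothetical (an AFI-like case with $\mu_1 = 2$), chosen for illustration only:

```python
import math
from itertools import combinations

# brute-force check of Tr_l[n_{a1} ... n_{av}] = C(D1 - v, l - v)
D1 = 8                                   # toy single-particle dimension
for l in range(1, 5):
    for v in range(1, l + 1):
        modes = set(range(v))            # any v distinct LIOM labels
        # number of l-particle basis states occupying all v chosen modes
        count = sum(1 for occ in combinations(range(D1), l)
                    if modes <= set(occ))
        assert count == math.comb(D1 - v, l - v)

# invert Eq. (18), z_l = sum_v C(D1 - v, l - v) mu_v, to recover mu_l
def mu_from_z(z, D1):
    mu = []
    for l, z_l in enumerate(z, start=1):
        rest = sum(math.comb(D1 - v, l - v) * m
                   for v, m in enumerate(mu, start=1))
        mu.append(z_l - rest)            # uses C(D1 - l, 0) = 1
    return mu

# hypothetical invariant values, for illustration only: an AFI-like case
# with mu_1 = 2 and all higher-order invariants equal to zero
z = [2, 2 * math.comb(D1 - 1, 1), 2 * math.comb(D1 - 1, 2)]
print(mu_from_z(z, D1))                  # -> [2, 0, 0]
```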
D. Quantized magnetization density in fully occupied regions

As a final part of this section, we show that the values of the invariants $\mu_1 \ldots \mu_k$ can be measured directly from the magnetization density within a region of the system where all sites are occupied. In particular, for the AFI (which is fully MBL and for which only $\mu_1$ takes a nonzero value), the magnetization density is given by $\mu_1/T$. Consider preparing the system in an $\ell$-particle state $|\Psi_R\rangle$ (where $\ell \le k$) by filling all sites in some finite region of the lattice, R, of linear dimension d, with all sites outside R remaining empty (here we assume this requires fewer than k particles).

For a plaquette p located deep within the fully occupied region, we find the time-averaged magnetization density as $\langle m_p \rangle = \langle \hat m_p \rangle_R$, where we introduced the shorthand $\langle O \rangle_R \equiv \langle\Psi_R|O|\Psi_R\rangle$. Using the expansion of $\hat m_p$ in Eq. (19), we thus find

$\langle \hat m_p \rangle_R = \sum_{\nu} \sum_{\alpha_1 < \cdots < \alpha_\nu} m^p_{\alpha_1 \ldots \alpha_\nu}\, \langle \hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu} \rangle_R.$  (22)

To analyze the sum, we note that, for a LIOM $\hat n_\alpha$ whose center $r_\alpha$ is located deep within the filled region R, all sites where $\hat n_\alpha$ has its support are occupied. Thus $\hat n_\alpha |\Psi_R\rangle = |\Psi_R\rangle$, up to a correction of order $e^{-d/\xi_l}$ [57]. Here the correction arises from the exponentially decaying tail of $\hat n_\alpha$ outside the filled region. For terms in the above equation where the centers of all the LIOMs $\alpha_1 \ldots \alpha_\nu$ are located near the plaquette p, the above result implies that $\langle \hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu} \rangle_R = 1 + O(e^{-d/\xi_l})$, since all of the LIOMs $\hat n_{\alpha_1} \ldots \hat n_{\alpha_\nu}$ are located deep within the initially occupied region. For all remaining terms in Eq. (22), one or more of the LIOMs $\alpha_1 \ldots \alpha_\nu$ are located outside the filled region, and thus reside at least a distance ∼ d from the plaquette p. In this case, the coefficient $m^p_{\alpha_1 \ldots \alpha_\nu}$ is exponentially small in $d/\xi_l$ [see the discussion below Eq. (19)]. For both categories of terms we can thus set $\langle m^p_{\alpha_1 \ldots \alpha_\nu}\, \hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu} \rangle_R = m^p_{\alpha_1 \ldots \alpha_\nu}$, at the cost of a correction of order $e^{-d/\xi_l}$. Doing so, we obtain

$\langle \hat m_p \rangle_R = \sum_{\nu=1}^{\ell} \sum_{\alpha_1 < \cdots < \alpha_\nu} m^p_{\alpha_1 \ldots \alpha_\nu} + O(e^{-d/\xi_l}).$  (23)

Using Eq. (21), we identify the νth sum above as the invariant $\mu_\nu/T$. Recalling that $\langle\Psi_R|\hat m_p|\Psi_R\rangle = \langle m_p \rangle$, we thus find

$\langle m_p \rangle = \frac{1}{T} \sum_{\nu=1}^{\ell} \mu_\nu + O(e^{-d/\xi_l}).$

The above discussion thus shows that the magnetization density deep within the filled region is given by the (convergent [58]) sum of the invariants $\{\mu_\nu\}$. In particular, for the AFI, where only $\mu_1$ is nonzero, $\langle m_p \rangle = \mu_1/T$. We note that the individual invariants $\mu_1 \ldots \mu_k$ may be extracted from the dependence of the magnetization density on the particle density in the system. Specifically, for a random initial state with a uniform, finite particle density ρ, the expectation value $\langle \hat n_{\alpha_1} \cdots \hat n_{\alpha_\nu} \rangle$, averaged over all choices of LIOMs, is given by $\rho^\nu$. Hence, at finite particle density ρ, the average magnetization density in the system is given by $\langle m_p \rangle \approx \frac{1}{T} \sum_{\nu \ge 1} \mu_\nu\, \rho^\nu$. The values of the individual invariants $\mu_\nu$ can thus be extracted from a fit of $\langle m_p \rangle$ as a function of ρ.

IV. SPECIFIC MODEL AND NUMERICAL SIMULATIONS

In this section we present a simple model of a periodically driven system of interacting fermions in two dimensions, which realizes either the AFI or a CIAFI phase. The model was briefly discussed in Sec. I. We first consider the limit of weak interactions. In this regime we argue that the system realizes the AFI phase with $\mu_1 = 2$. Subsequently, we show that, in the limit of strong interactions, the model is characterized by a quantized, nonzero value of the "two-particle cumulant" of the magnetization density, consistent with a CIAFI phase characterized by $\mu_2 = -2$. To support our conclusions, we provide numerical simulations of the model in the two regimes above.

The model we consider consists of spin-1/2 fermions on a two-dimensional bipartite square lattice with periodic boundary conditions. The Hamiltonian is given by

$H(t) = H_{\mathrm{dr}}(t) + H_{\mathrm{dis}} + H_{\mathrm{int}},$  (24)

where $H_{\mathrm{dr}}(t)$ describes piecewise-constant, time-dependent hopping, $H_{\mathrm{dis}}$ denotes a disorder potential, and $H_{\mathrm{int}}$ describes an on-site interaction between the fermions. The driving protocol, which is contained in $H_{\mathrm{dr}}(t)$, is divided into five segments, as depicted in Fig. 3(a). Each of the first four segments has duration ηT/4, while the fifth segment has duration (1 − η)T; the parameter η is a number between 0 and 1 which controls the localization properties of the model (see below).
In the first four segments, $H_{\mathrm{dr}}(t)$ turns on hopping for the four different bond types in a counterclockwise fashion, as indicated in Fig. 3a, while $H_{\mathrm{dr}}(t) = 0$ in the fifth segment. More specifically, in the jth segment (where j ≤ 4),

$H_{\mathrm{dr}}(t) = -J \sum_{r \in A} \sum_{s} \bigl(\hat c^\dagger_{r + b_j, s}\, \hat c_{r, s} + \mathrm{h.c.}\bigr).$  (25)

Here $\hat c_{r,s}$ annihilates a fermion on site r with spin s, and the vectors $\{b_j\}$ are given by $b_1 = -b_3 = (a, 0)$ and $b_2 = -b_4 = (0, a)$. The r-sum above runs over all sites in sublattice A of the bipartite square lattice. We set the tunneling strength to $J = \frac{2\pi}{\eta T}$, such that, in the absence of disorder and interactions, $H_{\mathrm{dr}}$ would generate a perfect transfer of particles across the active bonds in each of the first four segments. The parameter η controls how rapidly the "hopping π-pulses" are applied (and thereby how strong they are relative to the disorder and interaction potentials), and thus controls the localization properties of the model; smaller η yields stronger localization (see Ref. 37). The disorder and interaction terms $H_{\mathrm{dis}}$ and $H_{\mathrm{int}}$ are constant throughout the driving period and are given by

$H_{\mathrm{dis}} = \sum_{r,s} w_r\, \hat\rho_{r,s}, \qquad H_{\mathrm{int}} = V \sum_r \hat\rho_{r,\uparrow}\, \hat\rho_{r,\downarrow}.$  (26)

For each site, $w_r$ takes a random value in the interval [−W, W], and $\hat\rho_{r,s} \equiv \hat c^\dagger_{r,s} \hat c_{r,s}$ denotes the occupancy on site r. The parameter V has units of energy and denotes the strength of the interactions. Note that when $V \gg J$, tunneling is effectively blocked between doubly-occupied and vacant sites. As we show below, this blocking leads to a nonzero value of the higher-order invariant $\mu_2$.

To characterize the topological properties of the model, we consider the dynamics of particles in the two limits of weak and strong interactions. Below we demonstrate how these two regimes drive the model into the AFI phase with $\mu_1 = 2$ and a CIAFI phase with $\mu_2 = -2$, respectively. We substantiate these conclusions with numerical simulations in Sec. IV A.

In the absence of interactions, V = 0, the model in Eq. (24) reduces to two decoupled copies of the AFAI model from Ref. 31. When interactions are weak, but nonzero, Ref. 37 suggests that the system remains many-body localized (i.e., non-thermalizing). Since the model should be connected to the noninteracting AFAI, we hence expect the system to be in the AFI phase [37] with winding number $\mu_1 = 2$ (see also the discussion in Sec. I). The factor of 2 arises from the extra species of fermions introduced due to the spin-1/2 degree of freedom.

We now show that the model above is in a CIAFI phase with $\mu_2 = -2$ in the limit of strong interactions, $V \to \infty$. To see this, we consider the time-averaged magnetic moment $\bar M_{ij}$ (see Sec. III C) that results when initially occupying two single-particle states i and j, where each choice of i or j corresponds to a particular site and spin. Recall that tunneling is blocked when the first particle is located on, or tunnels to, a site occupied by the second particle. Hence, doublons (i.e., states where two particles occupy the same site) remain frozen in place, implying that $\bar M_{ij} = 0$ if i and j correspond to the same site being occupied. For all other initial configurations, interactions effectively do not affect the dynamics, and one can verify that $\bar M_{ij} = \bar M_i + \bar M_j$, where $\bar M_i$ denotes the time-averaged magnetic moment in the single-particle state i. As a result, the "cumulant" $C_{ij} \equiv \bar M_{ij} - \bar M_i - \bar M_j$ takes the value $-2a^2/T$ when the initialization ij corresponds to a doublon configuration, and value zero for all other 2-particle initializations (see Sec. III C for the definition of $C_{ij}$). We recall from Sec. III C that $\mu_2 = S_2 T/L^2$, where $S_2 \equiv \sum_{i<j} C_{ij}$.
Since there are $L^2/a^2$ distinct doublon configurations, where L denotes the physical dimension of the lattice, we find that $S_2 = -2L^2/T$. Thus, $\mu_2 = -2$ in the limit W = 0, $V \to \infty$. From the discussion in Sec. III, we expect the quantization of $\mu_2$ to persist for finite disorder, W, and finite (but large) values of the interaction strength, V.

The discussion above shows that the model in Eq. (24) is characterized by two distinct values of the invariant $\mu_2$ in the limits where V = 0 and $V \to \infty$, respectively. Due to the robust quantization of $\mu_2$, which is protected by 2-particle localization, we hence conclude that the system supports two distinct topological phases that arise when $V \ll J$ and $V \gg J$, respectively. The two phases are separated by a critical point, $V_c$ [42]: when V is increased past $V_c$ in the thermodynamic limit, the localization length in the two-particle sector should diverge at $V = V_c$, while $\mu_2$ changes abruptly from 0 to −2.

A. Numerical simulations

Here we substantiate the discussion above through numerical simulations of the model: we first consider the limit of weak interactions, and show that the (quantized) average magnetic moment per particle remains unaffected by the nonzero interaction strength, as our analytical discussion predicts for an AFI phase with $\mu_1 = 2$. Subsequently, we show that the model is characterized by a quantized, nonzero value of the invariant $\mu_2$ when V is large, demonstrating that the system is in a CIAFI phase, distinct from the $\mu_1 = 2$, $\mu_2 = 0$ AFI phase.

1. Weak interactions: AFI phase with $\mu_1 = 2$

We first present data from simulations of the model described above, in the limit of weak interactions. We consider a single disorder realization of the model with parameters $W = 2\pi/T$, $V = 0.1\,W$, and η = 1/16. From Ref. 37, we expect the model to be many-body localized with these parameters. Since the model is obtained by adding weak interactions to a model of the AFAI with winding number 2 (see Refs. 30 and 31; here the factor of 2 arises because of the spin degeneracy), we moreover expect the system to be in the $\mu_1 = 2$ AFI phase (i.e., with $\mu_\ell = 0$ for $\ell > 1$).

To probe the topology of the system, we compute the mean magnetic moments of random time-evolved 4-particle states in a lattice of 6 × 6 sites. The long-time-averaged magnetic moment, introduced in Sec. III, is defined as $\hat{\bar M} = \sum_p a^2\, \hat m_p$. The mean expectation value of $\hat{\bar M}$, averaged over randomly chosen $\ell$-particle states (i.e., states chosen randomly from a given orthonormal basis), is given by $M_0[\ell] \equiv \binom{D}{\ell}^{-1} \mathrm{Tr}_\ell\, \hat{\bar M}$, where the binomial coefficient $\binom{D}{\ell}$ counts the number of possible $\ell$-particle states in the system of $D = 2L^2$ single-particle states (here the factor of 2 arises due to the spin degeneracy, and L = 6 for the case we consider). Using that $\mathrm{Tr}_\ell\, \hat{\bar M} = z_\ell L^2/T$, along with Eq. (18), we can express $M_0[\ell]$ in terms of the topological invariants $\mu_1 \ldots \mu_\ell$. For $\ell = 4$ particles, our expectation that $\mu_1 = 2$ while $\mu_\ell = 0$ for $\ell > 1$ hence would lead to

$M_0[4] = 4a^2/T,$  (27)

corresponding to an average magnetic moment per particle of $a^2/T$. This result was previously established for the noninteracting limit of the model (where the system is in the AFAI phase) [30]. The discussion above hence shows that the quantized average magnetic moment per particle in the AFAI is unaffected by interactions, as long as the system remains in the AFI phase. To compute $M_0$ in the simulation, we pick as initial states 1972 random configurations of four particles located on individual sites; the arithmetic behind the prediction in Eq. (27) is spelled out in the short sketch below.
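A minimal sketch of this arithmetic follows (assuming, as above, $a = 1$ and measuring moments in units of $a^2/T$; the invariant values $\mu_1 = 2$, $\mu_{\ell > 1} = 0$ are the expectation being tested, not simulation output):

```python
from math import comb

L = 6                    # 6 x 6 lattice (a = 1), moments in units of a^2/T
D = 2 * L**2             # number of single-particle states, incl. spin
mu = {1: 2}              # hypothetical input: mu_1 = 2, all higher mu_l = 0

def M0(l):
    """Mean moment of random l-particle states, M0[l] = Tr_l(M) / C(D, l)."""
    z_l = sum(comb(D - v, l - v) * m for v, m in mu.items() if v <= l)
    return z_l * L**2 / comb(D, l)       # uses Tr_l(M) = z_l L^2 / T

print(M0(4))             # -> 4.0, i.e. a moment of a^2/T per particle
```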
We evolve each initialization for 5,000 driving periods with a fixed disorder realization (the same for all initial states). Fig. 3b shows the particle density in the resulting final state for one of the realizations, after evolution for 5,000 periods. White dots and arrows indicate the corresponding initial configuration of occupied sites and spins. Note that the particle density remains non-uniform and confined near the initial location of the particles, consistent with many-body localization. We compute the time-averaged magnetic moment $\bar M$ for each of the 1972 states, using the time-averaged bond currents. The 1972 values of $\bar M$ we obtained in this way are plotted in the histogram in Fig. 3c. Fig. 3d shows the time-averaged bond currents and magnetization density for the same state as in Fig. 3b; these are the data from which the magnetization is calculated. The distribution of $\bar M$ obtained from these initializations was found to have mean $3.999997\, a^2/T$ and standard deviation $\delta M = 0.001\, a^2/T$, resulting in a standard deviation of the mean of $\delta M/\sqrt{1972} \approx 0.00003\, a^2/T$. This result is consistent with a $\mu_1 = 2$ AFI phase [see Eq. (27)].

2. Strong interactions: CIAFI phase with $(\mu_1, \mu_2) = (2, -2)$

We now demonstrate that strong interactions drive the model into a CIAFI phase with $\mu_2 = -2$. These data were briefly discussed in Sec. I. Here we present them in further detail. To show that a large interaction strength drives the model into the CIAFI phase, we keep W and η fixed, but vary V. We moreover consider a single disorder realization with 18 × 18 sites. For each value of V we consider, we obtained the time evolution over 1000 driving periods for between 179 and 324 randomly chosen initializations where the two particles were located on particular sites and had distinct spins [59].

To establish the existence of a phase transition between the AFI and CIAFI phases, we considered the localization length in the system. We measured this using the inverse participation ratio of the density in the final state that resulted from each of the initializations we considered, $P \equiv (\sum_r |\rho_r|^2)^{-1}$, where $\rho_r = \sum_{s=\uparrow,\downarrow} \langle \hat c^\dagger_{r,s} \hat c_{r,s} \rangle$ denotes the particle density on site r in the final state. When each particle is localized on a particular site, P takes the value 1/4 (in the case of a doublon configuration) or 1/2. In contrast, $P = L^2/4$ indicates full delocalization (corresponding to $\rho_r = 2/L^2$ for all r). More generally, P can effectively be seen as 1/4 times the number of sites where the final state has support. This motivates us to define the effective localization length of the system, $\xi_{\mathrm{IPR}}$, as the average value of $\sqrt{4 P a^2}$ obtained from the initializations we probed.

In Fig. 1d, we plot this localization length, $\xi_{\mathrm{IPR}}$, as a function of V. As is evident in the figure, the localization length remains small for small values of V. This indicates that the $\mu_1 = 2$ AFI phase at V = 0 remains stable for finite values of the interaction strength, as was also suggested by the results in Sec. IV A 1. In the range between V = J and V = 10J, the localization length diverges, consistent with a phase transition. For $V \gtrsim 10J$, the localization length becomes small again, indicating that the system has transitioned back into a stable phase. The localization length appears to remain small as V goes to ∞; we hence expect this new phase to be the $\mu_2 = -2$ CIAFI phase.
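The two limiting values of P quoted above are easy to verify; the following minimal sketch of the diagnostic assumes only that the site-resolved density is stored as an L × L array normalized to the particle number:

```python
import numpy as np

def xi_ipr(rho, a=1.0):
    """Effective localization length from the inverse participation ratio."""
    P = 1.0 / np.sum(np.abs(rho) ** 2)   # P = (sum_r |rho_r|^2)^(-1)
    return np.sqrt(4.0 * P) * a          # xi_IPR = sqrt(4 P a^2)

L = 18                                   # lattice of 18 x 18 sites (Sec. IV A 2)

# frozen doublon: both particles on one site -> P = 1/4, xi_IPR = a
rho_doublon = np.zeros((L, L)); rho_doublon[0, 0] = 2.0

# full delocalization: rho_r = 2/L^2 for all r -> P = L^2/4, xi_IPR = L a
rho_flat = np.full((L, L), 2.0 / L**2)

print(xi_ipr(rho_doublon), xi_ipr(rho_flat))   # -> 1.0 18.0
```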
To verify the existence of two distinct phases (namely the $\mu_1 = 2$ AFI and the $(\mu_1, \mu_2) = (2, -2)$ CIAFI phases), we computed the sum $S_2 \equiv \sum_{i<j} C_{ij}$ (see Secs. III C or I for the definitions of these quantities). In Fig. 1c, we plot the value of this sum. The data show a clear transition from $\mu_2 = 0$ to $\mu_2 = -2$ in the range V = J to V = 10J, where the localization length diverges. This further supports the existence of a $(\mu_1, \mu_2) = (2, -2)$ CIAFI phase for strong interactions, which is distinct from the AFI phase.

V. DISCUSSION

In this work, we characterized the topological properties of periodically driven systems of interacting fermions in two dimensions. We established that the quantized magnetization of the AFAI persists in its interacting generalization, the anomalous Floquet insulator (AFI). As a second result, we identified a new class of intrinsically correlated nonequilibrium phases, namely the correlation-induced anomalous Floquet insulators (CIAFIs). The topological invariants characterizing the CIAFIs are encoded in the multi-particle correlations of the time-averaged magnetization density. While this work focused on driven fermionic models and their bulk topological invariants, our discussion can be readily extended to bosonic systems with particle number conservation. Importantly, the topological protection of the CIAFIs does not require full many-body localization, but rather relies on k-particle localization, where the system is localized for any finite number of particles up to a maximum number, k. The existence of k-particle localization is well established [42]. Since the existence of the CIAFI does not rely on full many-body localization, we may expect the behavior described above to be manifested via experimental signatures in the prethermal dynamics of systems which eventually thermalize at long times. Searching for other models that give rise to nontrivial values of these invariants, and characterizing the physical properties that they imply, will be interesting directions for future studies.

We demonstrated that CIAFIs may be realized in a tight-binding model with Hubbard-type interactions subject to a stepwise driving protocol. Recently, a noninteracting version of such a model was experimentally realized with ultracold atoms in optical lattices [60]. The CIAFI phases may be achieved in a similar experiment by adding Hubbard-type interactions to the system. We expect this type of interaction to be natural to implement with ultracold atoms in optical lattices. Thus, we speculate that experimental realization of CIAFI phases is feasible with current experimental platforms.

At this point it is not clear whether the CIAFI phases are compatible with MBL, i.e., whether they can exist in the thermodynamic limit of $L \to \infty$ and $k \to \infty$. (For finite k, localization is possible, and the physics described above is rigorously applicable.) In particular, we expect that CIAFI phases will exhibit dynamics strongly dependent on the initial state. In the model of Sec. IV, initial states where some large region R is doubly occupied would support chiral edge states moving around such regions. If the initial state contains such "internal edges," they may thermalize and serve as a weak heat bath for the remainder of the system. Next, if the density of filled regions R in the system is increased, we expect that at some point the thermalizing internal edges will form a connected network, destroying localization.
In contrast, initial states without filled, connected regions are expected to be much more stable, since there are no direct thermalization processes which involve only a few nearby particles; thermalization, if it occurs at all, will proceed either due to rare thermal inclusions, or due to multi-particle tunneling into, e.g., a state with "internal edges." After the initial posting of this work, another preprint independently classified the bulk topological properties of two-dimensional MBL systems in the presence of particle number conservation [61]. Interestingly, the classification in Ref. 61 did not contain the CIAFI phases, suggesting that CIAFI phases and MBL may be incompatible. A definite answer to this question, however, remains lacking, and will be an interesting direction for future studies. In any case, the features above suggest that CIAFI phases (rigorously established for finite particle number) may provide a versatile playground for studying the interplay of weak thermalizing baths and MBL regions, which is expected to give new insights into the stability of MBL in 2d.

The topological classification we developed in the present work relied on particle number conservation. Chiral phases of spins and bosons without particle number conservation, which are close relatives of the AFAI (with the higher-order invariants being zero, $\mu_\ell = 0$ for $\ell \ge 2$), were considered in Ref. 29. It was shown that, when many-body localized, such phases are characterized by a quantized topological index which describes the pumping of quantum information along the edge over one driving period. Such an index arises from the rigorous classification of anomalous local unitary operators in one-dimensional systems, developed by Gross et al. [62]. It will be an interesting direction for future studies to investigate whether the bulk classification of the present work can be generalized to systems where particle conservation is not present.

In the future, it will moreover be interesting to investigate how thermalization is manifested in experimentally realistic situations for the CIAFI phases, and what the corresponding time scales are. With k-particle localization present (for some large k), thermalization must be driven by correlated processes involving more than k particles. It is natural to expect that such thermalizing processes will be parametrically slow, and therefore signatures of the CIAFI phases (and the AFI), such as the quantization of magnetization, would be observable even if MBL is eventually destroyed. A systematic study of such thermalization timescales will be an interesting question for future studies, with significance beyond the context of the topological phases we considered here.

[43] […] disorder realizations where resonances occur between two or more sites separated by a distance comparable to the system size, L. Disorder realizations supporting such accidental resonances do not meet the conditions for k-particle or many-body localization, as defined in Sec. II. However, for a randomly chosen disorder realization within the k-particle localized region of parameter space, the probability that the $\ell$-particle quasienergy spectrum (for each $\ell \le k$) features any such accidental resonance goes to zero in the thermodynamic limit $L \to \infty$ [42]. In the following, we assume that the disorder realization under consideration does not feature such accidental resonances; within the k-particle localized regime of parameter space, this assumption holds with probability 1 in the thermodynamic limit.
[44] As for the k = 1 (i.e., single-particle) special case [30,31], we expect that k-particle localization in the bulk can coexist with delocalized edge states [42]. A detailed study of the interplay between bulk localization and delocalized edge states in the case of full MBL is left for future work; some aspects have been discussed in Ref. 37.

[52] For a finite system, the fact that $m^p_{\alpha_1 \ldots \alpha_\ell}$ is exponentially insensitive to the details of the system far away from the plaquette p means that it may only change by an amount of order $e^{-L/\xi}$ when the system size is increased. This implies that the sum $\sum_{\alpha_1 < \cdots < \alpha_\ell} m^p_{\alpha_1 \ldots \alpha_\ell}$ is given by its value in the thermodynamic limit, up to a correction of order $e^{-L/\xi}$.

[53] Here the region of support is understood as the region of the lattice where the particle density is significant in the state $|\psi_n\rangle$. See Appendix B for further details.

[57] To see this, note that $\hat n_\alpha |\Psi_R\rangle = (1 - \hat f_\alpha \hat f^\dagger_\alpha)|\Psi_R\rangle$. The operator $\hat f^\dagger_\alpha$ is a polynomial in $\{\hat c_i\}$ and $\{\hat c^\dagger_i\}$, where each term has the net effect of creating one fermion in the region around LIOM α. Since all sites near the LIOM α are occupied in the state $|\Psi_R\rangle$, $\hat f^\dagger_\alpha |\Psi_R\rangle = 0$, and thus $\hat n_\alpha |\Psi_R\rangle = |\Psi_R\rangle$.

[58] To see that the sum in Eq. (23) converges, note that the coefficient $m^p_{\alpha_1 \ldots \alpha_\ell}$ is exponentially suppressed in $d/\xi_l$, where d is the distance from any of the LIOM centers $r_{\alpha_1} \ldots r_{\alpha_\ell}$ to the plaquette p. The number of distinct LIOMs whose centers are located within a radius $\xi_l$ of the plaquette p is of order $\xi_l^2/a^2$, where a is the lattice constant of the system. Therefore, the coefficient $m^p_{\alpha_1 \ldots \alpha_\ell}$ vanishes exponentially when $\ell \gg \xi_l^2/a^2$. Recalling that $\mu_\ell/T \equiv \sum_{\alpha_1 < \cdots < \alpha_\ell} m^p_{\alpha_1 \ldots \alpha_\ell}$ must take an integer value (in units of 1/T) for each $\ell$, we thus conclude that $\mu_\ell$ equals zero when $\ell \gg \xi_l^2/a^2$.

[59] More precisely, the initializations were divided into three classes, containing initializations where the two particles were located on the same site, on adjacent sites, and all remaining initializations, respectively. The average magnetization was found through the sum of the obtained mean values of $\bar M$ within each class, weighted according to the number of states in the class.

Appendix A

In this appendix we establish that the time-averaged current that passes through a cut C between two plaquettes p and q is determined by two quasilocal operators, $\hat m_p$ and $\hat m_q$, with support centered at p and q, respectively [see Eq. (4) and Fig. 4]. By considering two plaquettes separated by a distance much longer than the localization length, this provides a prescription for uniquely identifying the magnetization density operator $\hat m_p$ (up to corrections exponentially small in the distance, which can be of order the system size).

FIG. 4. a) Schematic depiction of the argument showing that the time-averaged current through a cut C between two plaquettes p and q only depends on the cut's two endpoints. Specifically, since there can be no accumulation of charge over time in the region between the cuts C and C′, the same current must pass through the two cuts, and thus $\bar I_C = \bar I_{C'}$ for any two cuts C and C′ between the plaquettes p and q. b) The vanishing divergence of the current implies that $\bar I_{C_{pq}} + \bar I_{C_{qr}} = \bar I_{C_{pr}}$.

We recall from the main text that the operator corresponding to the current through the cut C is given by

$I_C(t) = \sum_{b \in B_C} I_b(t),$  (A1)

where $I_b$ denotes the bond current operator on bond b, and the sum runs over all bonds that cross the cut C. The goal of this Appendix is to find the time-averaged expectation value of the current, $\langle I_C \rangle$, resulting from some given initial state $|\psi\rangle$.
As in the main text, we use $\langle O \rangle \equiv \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt\, \langle\psi(t)|O(t)|\psi(t)\rangle$. The time-averaged expectation value of the current $I_C$ may equivalently be computed in the Heisenberg picture as $\langle I_C \rangle = \langle\psi|\bar I_C|\psi\rangle$, where $|\psi\rangle$ denotes the initial state of the system. Here, as in the main text, for any Schrödinger-picture operator O(t) [such as $I_C(t)$], $\bar O$ denotes the time average of O(t) in the Heisenberg picture,

$\bar O \equiv \lim_{\tau \to \infty} \frac{1}{\tau} \int_0^\tau dt\, U^\dagger(t)\, O(t)\, U(t).$  (A2)

The time-averaged current operator $\bar I_C$ is thus obtained by transforming the time-dependent operator $I_C(t)$ in Eq. (A1) with the evolution operator U(t), and integrating over time as in Eq. (A2).

To explore the properties of $\bar I_C$, we consider the time-averaged current for a different cut, C′, between the same two plaquettes p and q; see Fig. 4a. We note that $I_C(t) - I_{C'}(t) = \dot N_R(t)$, where $N_R$ measures the number of particles in the region R between the cuts C and C′ (shaded region in Fig. 4). Importantly, since $N_R$ is bounded by the number of sites in the region R, the long-time-averaged value of $\langle \dot N_R \rangle$ must vanish. We thus conclude that $\langle I_C \rangle = \langle I_{C'} \rangle$. Since this holds for any initial state $|\psi\rangle$, we conclude that

$\bar I_C = \bar I_{C'}.$  (A3)

As a next step, we note from Eq. (A1) that $\bar I_C = \sum_{b \in B_C} \bar I_b$, where $\bar I_b$ denotes the time-averaged current on bond b [see Eq. (A2)]. We note that the operator $I_b(t)$ is local, with support only on the sites connected by the bond b. For many-body localized systems, this implies that the operator $\bar I_b$ is a localized integral of motion, with support within a distance $\sim \xi_l$ from the bond b, up to an exponentially small correction [50]. Hence, $\bar I_C$ is given by a sum of terms, each of which only has support within a region of radius $\xi_l$, centered at a point along the cut C.

The requirement that $\bar I_C$ is given by a sum of local terms as described above, while at the same time taking the same value for all cuts between plaquettes p and q [Eq. (A3)], significantly constrains the form that $\bar I_C$ can take. In particular, this implies that $\bar I_C = I(p, q)$, where the operator I(p, q) only depends on the locations of the two plaquettes p and q (and not on the details of the cut C). Moreover, for any cut between plaquettes p and q, I(p, q) is given by a sum of terms which only have support in a region of width $\xi_l$ around the cut. For any site located a distance larger than $\xi_l$ from both plaquettes p and q, we can find a cut that remains separated from the site by a distance larger than $\xi_l$. Therefore the support of the operator I(p, q) can only include sites within a localization length of the endpoints p and q. Hence, we write

$I(p, q) = A_1(p, q) + A_2(p, q),$  (A4)

where $A_1(p, q)$ has its full support within a region of width $\xi_l$ around plaquette p, and $A_2(p, q)$ has support around plaquette q. The operators $A_1(p, q)$ and $A_2(p, q)$ depend only on the locations of plaquettes p and q, respectively. By letting the cut from p to q go through an arbitrary plaquette r on the torus (see Fig. 4b), we conclude from the arguments above that I(p, r) + I(r, q) = I(p, q). This implies

$A_1(p, r) + A_2(p, r) + A_1(r, q) + A_2(r, q) = A_1(p, q) + A_2(p, q).$  (A5)

The only terms on the left-hand side with support near plaquette r are the terms $A_2(p, r)$ and $A_1(r, q)$, while none of the terms on the right-hand side have support near plaquette r. We thus conclude that $A_2(p, r) = -A_1(r, q)$ for any choice of two plaquettes p and q. Hence we may write $A_1(r, q) = A(r)$ and $A_2(p, r) = -A(r)$ for some operator A(r) which only depends on the location of plaquette r and has its full support near plaquette r.
Using this in Eq. (A4), we find $I(p, q) = A(p) - A(q)$. Identifying $A(p) = \hat m_p$, we thus conclude that Eq. (4) holds.

Appendix B: Derivation of Eq. (10)

Here we derive Eq. (10), which is used to establish the integer quantization of the topological invariant $z_\ell$. To recapitulate, we consider a k-particle localized system, where k may be infinite in the case of full MBL. For a given $\ell \le k$, we consider the $\ell$-particle Floquet eigenstates of the system, $\{|\psi_n\rangle\}$, with corresponding quasienergies $\{\varepsilon_n\}$, and let $\tilde\varepsilon_n$ denote the perturbed quasienergy corresponding to $\varepsilon_n$ when the weak uniform magnetic field $B_0 = 2\pi/L^2$ is inserted, which results in one flux quantum piercing the torus (see below for details). The goal of this Appendix is to establish two results. First, we show that for each $\ell$-particle Floquet eigenstate, $|\psi_n\rangle$,

$\tilde\varepsilon_n = \varepsilon_n - B_0\, \langle\psi_n|\hat{\bar M}|\psi_n\rangle + O(L^{-5/2}).$  (B1)

Here, and in the remainder of this Appendix, $O(L^{-p})$ indicates a correction which goes to zero at least as fast as $L^{-p}$ [64]. (I.e., in the following, we only indicate how rapidly corrections decrease with system size.) Secondly, we show that, when summed over all $\ell$-particle Floquet states, the corrections of order $L^{-5/2}$ in Eq. (B1) approximately cancel out, yielding a net correction which is exponentially suppressed in system size:

$\sum_n (\tilde\varepsilon_n - \varepsilon_n) = -B_0\, \mathrm{Tr}_\ell(\hat{\bar M}) + O(e^{-L/\xi}),$  (B2)

where $O(e^{-L/\xi})$ likewise indicates a correction that goes to zero as $e^{-L/\xi}$ in the thermodynamic limit.

Eqs. (B1) and (B2) implicitly require that, for each quasienergy level $\varepsilon_n$ of the (unperturbed) zero-flux system, it should be possible to identify a unique quasienergy level $\tilde\varepsilon_n$ of the (perturbed) one-flux system which satisfies Eq. (B1). In Sec. B 4 below, we confirm that such a complete one-to-one identification is possible for all but a set of disorder realizations which has measure zero in the thermodynamic limit. As noted in the main text, Eq. (B2) does not follow trivially from first-order perturbation theory in the weak magnetic field $B_0$: under a continuous perturbation of the system, the system's quasienergy spectrum undergoes exponentially many avoided crossings due to resonances between many-body Floquet eigenstates separated by a large distance in Fock space. Hence, first-order perturbation theory breaks down for the system. Instead, we establish Eq. (10) with an alternative approach, using the localization properties of the many-particle Floquet eigenstates.

In order to follow this approach, we use a succession of auxiliary results which are not discussed in detail in the main text, but are crucial for the proof of Eqs. (B1) and (B2). The line of arguments proceeds as follows: we first show explicitly how the uniform magnetic field $B_0$ can be implemented in the system (Sec. B 1). Subsequently, in Sec. B 2 we show that, for a given finite region S of the lattice, it is always possible to choose a gauge where the Hamiltonian $\tilde H$ of the one-flux system resembles the Hamiltonian H of the zero-flux system locally within S, and likewise for the Floquet operators $\tilde U$ and U (Sec. B 3). Using these results, we demonstrate in Sec. B 4 that the Floquet eigenstates and quasienergies, $\{|\psi_n\rangle\}$ and $\{\varepsilon_n\}$, are robust to the perturbation caused by the insertion of the weak uniform magnetic field $B_0$, such that the one-to-one identification described above is possible. From these auxiliary results, we prove Eq. (B1) in Sec. B 5, and finally use Eq. (B1) along with the LIOM structure of the system to establish Eq. (B2) (Sec. B 6).
For the sake of brevity, throughout this Appendix we work with a fixed degree of localization and particle number, unless otherwise noted. Thus, in the following, k and ℓ are fixed constants that refer to the system's degree of localization and to the number of particles in the system, respectively. We take ℓ ≤ k in the discussion below.

Implementation of magnetic flux

Here we discuss how the magnetic flux is implemented. The system we consider consists of interacting fermions on a lattice with the geometry of a torus, of dimensions L × L. The Hamiltonian of the system (in the absence of a flux) takes the form

\[
H(t) = \sum_{ij} J_{ij}(t)\, \hat c_i^\dagger \hat c_j + H_{\rm int}(t), \tag{B3}
\]

where \(\hat c_i\) annihilates a fermion on site i of the lattice. Here the first term contains both hopping and on-site potentials, including disorder, with \(J_{ij}(t) = J_{ji}^*(t)\), while the term \(H_{\rm int}\) accounts for interactions. We allow both parts of the Hamiltonian to be time-dependent, with periodicity T. To simplify the discussion, we consider the case of a square lattice model with nearest-neighbour hoppings, and a density-density interaction described by \(H_{\rm int} = \frac{1}{2}\sum_{i,j} \hat\rho_i \hat\rho_j V_{ij}(t)\), where \(\hat\rho_i = \hat c_i^\dagger \hat c_i\) and \(V_{ij}(t) = V_{ji}(t)\) is real. In the general case of a quasilocal Hamiltonian, the results below can also be derived using similar arguments.

In this subsection we are interested in finding the Hamiltonian \(\tilde H(t)\) of the system when the uniform magnetic field \(B_0 = 2\pi/L^2\) is inserted, corresponding to one flux quantum through the surface of the torus. Having assumed \(H_{\rm int}(t)\) to consist of density-density interactions, only the first term in Eq. (B3) is affected by the magnetic flux. The Hamiltonian \(\tilde H(t)\) thus takes the form

\[
\tilde H(t) = \sum_{ij} J_{ij}(t)\, e^{i\theta_{ij}}\, \hat c_i^\dagger \hat c_j + H_{\rm int}(t).
\]

Here, the Peierls phases \(\{\theta_{ij}\}\), with \(\theta_{ij} = -\theta_{ji}\), must ensure that the total phase acquired by traversing a closed loop on the torus is given by \(B_0 A_S\) (mod 2π), where \(A_S\) is the area enclosed by the loop [65]. There are (infinitely) many distinct configurations of the phases \(\{\theta_{ij}\}\) that satisfy this condition, corresponding to different choices of gauge for the one-flux Hamiltonian \(\tilde H(t)\). As the starting point for the following discussion, we consider the following Landau-type gauge: let \(\theta^x_i\) denote the Peierls phase for hopping along the bond in the positive x-direction from site i (and similarly let \(\theta^y_i\) be the Peierls phase for hopping in the positive y-direction), and give them the values

\[
\theta^y_i = B_0\, a\, x_i, \qquad \theta^x_i = -B_0\, L\, y_i\, \delta_{x_i, L}. \tag{B4}
\]

Here \(x_i\) and \(y_i\) denote the coordinates of site i (defined with the branch cut outside \(S_0\)), and \(\delta_{ij}\) denotes the Kronecker delta symbol, such that \(\delta_{x_i,L}\) takes the value 1 if \(x_i = L\), while \(\delta_{x_i,L} = 0\) for all other values of \(x_i\). Recall that a is the lattice constant. The phases \(\theta^y_i\) ensure that a trajectory encircling a plaquette acquires a phase of \(B_0 a^2\) if the trajectory does not cross the branch cut of the x-position operator between x = L and x = 0. The phase \(\theta^x_i\), which does not appear in the Landau gauge in an open geometry, is necessary to ensure that the phase is also given by \(B_0 a^2\) (mod 2π) for trajectories encircling plaquettes across the branch cut.

The goal of the following is to show that we can choose another gauge where \(B_0\) only weakly perturbs the Hamiltonian within a particular finite region of the lattice, S, which consists of one or more non-overlapping disk-shaped regions, \(S_1, \ldots, S_N\), whose combined area, \(A_S\), is much smaller than \(L^2\).
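As a quick consistency check (a worked step; the explicit prefactors in Eq. (B4) are the natural Landau-type choice and should be read as a reconstruction), consider a plaquette that crosses the branch cut, traversed counterclockwise from the site at (L, y):

\[
\underbrace{-B_0 L y}_{\theta^x \text{ at } (L,\,y)} \;+\; \underbrace{B_0 a^2}_{\theta^y \text{ at } (a,\,y)} \;+\; \underbrace{B_0 L (y+a)}_{-\theta^x \text{ at } (L,\,y+a)} \;-\; \underbrace{B_0 a L}_{-\theta^y \text{ at } (L,\,y)} \;=\; B_0 a^2 .
\]

Every plaquette, including those across the branch cut, thus carries flux \(B_0 a^2\), and the \((L/a)^2\) plaquettes together carry the total flux \(B_0 L^2 = 2\pi\), i.e., one flux quantum.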
We reach such a gauge through the following transformation of the one-flux Hamiltonian, with the gauge choice prescribed in Eq. (B4): we apply the unitary transformation \(\tilde H(t) \to V \tilde H(t) V^\dagger\), with \(V = \exp\big(i\sum_i \chi_i \hat n_i\big)\), where \(\chi_i = B_0\, x^{(n)}_0\, y_i\) for sites i within subregion \(S_n\) (and \(\chi_i = 0\) elsewhere), and \((x^{(n)}_0, y^{(n)}_0)\) denotes the center of subregion \(S_n\). In this case, one can verify that, for sites within subregion n, the Peierls phases resulting from this transformation take the following values:

\[
\theta^y_i = B_0\, a\, \big(x_i - x^{(n)}_0\big), \qquad \theta^x_i = 0. \tag{B5}
\]

The latter holds since the branch cut of the x-coordinate does not intersect S. Since \(S_n\) has disk geometry and is centered around \((x^{(n)}_0, y^{(n)}_0)\), we have \(|x_i - x^{(n)}_0| \lesssim \sqrt{A_S}\) for sites i within subregion \(S_n\). Hence we confirm that the Peierls phases are all of order \(\sqrt{A_S}\, a/L^2\) for bonds within S, and thus much smaller than 1 in the limit \(A_S \ll L^2\) specified above.

Response of the Hamiltonian

An important result we will use extensively in the following is that, for large systems, the insertion of the uniform field \(B_0\) only weakly perturbs the system, up to a gauge transformation. To see this, we consider the action of the perturbation induced by \(B_0\), \(\delta H(t) \equiv \tilde H(t) - H(t)\) (in the particular gauge we consider), on a state \(|\psi\rangle\) with an arbitrary number of particles, where all particles are located in the finite region S that was introduced in the previous subsection. As a first step, we note that \(\delta H(t)|\psi\rangle = \delta H(t) P_S |\psi\rangle\), where \(P_S\) projects onto the subspace where all particles are located within S. Using that \(\hat c_i P_S = 0\) if site i is located outside S, we find

\[
\delta H(t)\, P_S = \sum_{i,j \in S} K_{ij}\, \hat c_i^\dagger \hat c_j\, P_S, \qquad K_{ij} \equiv J_{ij}\,\big(e^{i\theta_{ij}} - 1\big),
\]

where the Peierls phases \(\{\theta_{ij}\}\) are as given in Eq. (B5) above, and we suppressed time dependence for brevity. Below, we establish an upper bound for the spectral norm [66] of \(\delta H(t) P_S\), \(\|\delta H(t) P_S\|\). To do this, we use that \(\|M\|^2 \le \mathrm{Tr}(M^\dagger M)\). Since \(\theta_{ij} = 0\) for i = j, the terms in the resulting double sum are only nonzero when \(i_1 = i_2\) and \(j_1 = j_2\). Thus,

\[
\|\delta H(t)\, P_S\|^2 \le \sum_{i,j} |K_{ij}|^2. \tag{B8}
\]

We now estimate the maximal scale of the right-hand side above. We recall from the discussion at the end of Subsection B 1 that the Peierls phases \(\{\theta_{ij}\}\), as given in Eq. (B5), are of order \(\sqrt{A_S}\, a/L^2\) or smaller for bonds within the region S. This implies that the value of each non-vanishing term in the sum in Eq. (B8) is of order \(J^2 A_S a^2/L^4\) or less, where J denotes the typical scale of the (off-diagonal) tunneling coefficients \(\{J_{ij}\}\). To estimate the number of non-vanishing terms in the sum we recall, from the assumptions made in the beginning of Subsection B 1, that the tunneling coefficients \(J_{ij}\) only couple nearest-neighbor pairs of sites in the lattice. Hence, for each choice of the index i, \(J_{ij}\) may only be non-vanishing for four choices of the index j. These considerations show that there are only of order \(A_S/a^2\) non-vanishing terms in the sum above. Using that each non-vanishing term is of order \(J^2 A_S a^2/L^4\), we find that \(\|\delta H P_S\|^2 \lesssim A_S^2 J^2 L^{-4}\). Here \(a \lesssim b\) indicates that a is smaller than b, or of order b. Thus we conclude that

\[
\|\delta H(t)\, P_S\| \lesssim J A_S / L^2. \tag{B10}
\]

In the sense of the operator norm, the difference between the Hamiltonians with and without one flux quantum uniformly piercing the entire torus decays to zero with the inverse of the total system area, when acting on states confined to the region S, and with a judicious choice of gauge.
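Collecting the two estimates above into a single chain (a worked step; the factor of 4 counts nearest-neighbor bonds per site on the square lattice), the bound in Eq. (B10) follows as

\[
\|\delta H(t) P_S\|^2 \;\le\; \sum_{i,j} |K_{ij}|^2 \;\lesssim\; \underbrace{\frac{4 A_S}{a^2}}_{\text{non-vanishing terms}} \times \underbrace{J^2\, \frac{A_S a^2}{L^4}}_{\text{size of each term}} \;=\; \frac{4 J^2 A_S^2}{L^4} .
\]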
a. Action on a localized state

Using the above result, we now show that a gauge exists where \(\delta H\) is small when acting on states which are not strictly confined to the region S of the lattice, but rather only exponentially localized. Specifically, we consider a state \(|\psi\rangle\) whose full support is exponentially confined to a region S which consists of one or more disk-shaped subregions of radius r, with the probability of finding a particle a distance s from the center of the nearest subregion decaying as \(e^{-s/\xi_l}\) when s > r.

To conveniently quantify the extent to which particles are confined within a subregion of the lattice, for each j = 1, 2, …, we let \(|\psi_j\rangle\) denote the component of the wavefunction \(|\psi\rangle\) where the outermost particle is located in the distance interval between (j−1)a and ja from the nearest subregion of S. Specifically, \(|\psi_j\rangle \equiv (P_j - P_{j-1})|\psi\rangle\), where \(P_j\) denotes the projector onto the states where all particles are located within a distance ja of the center of the nearest subregion of S. From this definition one can verify that \(|\psi\rangle = \sum_{j=1}^\infty |\psi_j\rangle\). Moreover, using that \(P_j P_k = P_{\min(j,k)}\), it follows that the components are mutually orthogonal: \(\langle\psi_j|\psi_k\rangle = 0\) for j ≠ k. From the definitions above, the probability of finding a particle more than a distance ja from the center of S is given by \(\langle\psi|(1-P_j)|\psi\rangle = \sum_{j'=j+1}^\infty \langle\psi_{j'}|\psi_{j'}\rangle\). Since the left-hand side must be of order \(e^{-ja/\xi_l}\) for ja > r, and each term on the right-hand side is positive, we must have \(\langle\psi_j|\psi_j\rangle \lesssim e^{-ja/\xi_l}\) for j > r/a.

We now use the above result to obtain a bound on the state \(\delta H|\psi\rangle\). Inserting \(|\psi\rangle = \sum_{j=1}^\infty |\psi_j\rangle\), and using \(P_j|\psi_j\rangle = |\psi_j\rangle\), one can verify that \(|\psi\rangle = P_S|\psi\rangle + \sum_{j > r/a} P_j|\psi_j\rangle\), where \(P_S \equiv P_{r/a}\) denotes the projector onto the subspace where all particles are located within the region S (for convenience we assume r to be an integer multiple of the lattice constant a). Using this result along with the triangle inequality and Eq. (B10), we hence obtain

\[
\|\delta H|\psi\rangle\| \le \|\delta H P_S\| + \sum_{j > r/a} \|\delta H P_j\|\, \big\| |\psi_j\rangle \big\|.
\]

The considerations from Sec. B 2 show that we may choose a gauge for \(\tilde H\) such that \(\|\delta H P_S\| \lesssim J A_S/L^2\), and \(\|\delta H P_j\| \lesssim J A_{S_j}/L^2\) for any choice of j, where \(A_{S_j} \sim (ja)^2\) denotes the area of the region projected onto by \(P_j\). Using that \(\sum_{j > j_0} j^2 e^{-j/\kappa} \sim j_0^2 e^{-j_0/\kappa}\) when \(j_0 \gg \kappa\), one can then verify that the contribution of the terms with j > r/a is smaller than \(J A_S/L^2\) by a factor exponentially small in \(r/\xi_l\), where \(A_S \sim r^2\) denotes the area of the region S. Thus, since \(r \gg \xi_l\), we find

\[
\|\delta H|\psi\rangle\| \lesssim J A_S / L^2. \tag{B12}
\]
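The tail estimate invoked in the derivation of Eq. (B12) is an elementary integral (a worked step, with κ playing the role of \(\xi_l/a\)):

\[
\sum_{j > j_0} j^2 e^{-j/\kappa} \;\approx\; \int_{j_0}^{\infty} x^2 e^{-x/\kappa}\, dx \;=\; \kappa\, e^{-j_0/\kappa}\,\big(j_0^2 + 2 j_0 \kappa + 2 \kappa^2\big) \;\sim\; j_0^2\, \kappa\, e^{-j_0/\kappa} \qquad (j_0 \gg \kappa).
\]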
Response of the Floquet operator

We now show that, for any region S in the lattice that consists of one or more disk-shaped subregions, it is possible to find a gauge in which the Floquet operators of the one- and zero-flux systems, \(\tilde U(T)\) and U(T), have nearly identical actions on states \(|\psi\rangle\) localized within S: \(\tilde U(T)|\psi\rangle \approx U(T)|\psi\rangle\). Here the state is said to be localized within S if the probability of finding a particle a distance s from the center of the nearest subregion of S decays as \(e^{-s/\xi_l}\) for s > r, where r denotes the radius of S.

First, we note that \(\|(U - \tilde U)|\psi\rangle\| = \|(\tilde U^\dagger U - 1)|\psi\rangle\|\). This follows from the unitarity of \(\tilde U\), since \(\big\| |\Psi\rangle \big\| = \big\|\tilde U^\dagger |\Psi\rangle\big\|\) for any state \(|\Psi\rangle\). Using that

\[
\tilde U^\dagger(T)\, U(T) - 1 = i \int_0^T dt\, \tilde U^\dagger(t)\, \delta H(t)\, U(t),
\]

along with the triangle inequality, we thus find

\[
\|(U - \tilde U)|\psi\rangle\| \le \int_0^T dt\, \|\delta H(t)\, U(t)|\psi\rangle\|. \tag{B14}
\]

We now use that U(t) is local at all times 0 ≤ t ≤ T, due to the finite Lieb-Robinson velocity v of the system. The locality implies that, for the state \(U(t)|\psi\rangle\), the probability of finding a particle a distance s from the center of S decays exponentially when s ≫ r. Using the result in Eq. (B12) from the previous subsection, we thus find

\[
\|\delta H(t)\, U(t)|\psi\rangle\| \lesssim J A_S / L^2. \tag{B15}
\]

Using this in the inequality in Eq. (B14), we conclude that

\[
\|(\tilde U - U)|\psi\rangle\| \lesssim J T A_S / L^2. \tag{B16}
\]

The result in Eq. (B16) shows that, with a judicious choice of gauge, the Floquet operators of the one- and zero-flux systems give nearly identical results when acting on a localized state. In this sense, the insertion of a uniform magnetic field \(B_0\) only weakly modifies the Floquet operator for large systems.

Response of Floquet eigenstates and quasienergy spectrum

We now show that, in the subspace with k or fewer particles, the quasienergy spectrum and Floquet eigenstates of k-particle localized systems are robust to perturbations, and only weakly affected by the insertion of the uniform magnetic field \(B_0\). In this subsection, it is useful to use notation that relates the quasienergies and Floquet eigenstates to the LIOM decomposition in Eq. (1) (which is valid in the subspace of up to k particles, which we consider): in the following we thus let \(|\Psi_{\alpha_1\ldots\alpha_\ell}\rangle \equiv \hat f^\dagger_{\alpha_1}\cdots\hat f^\dagger_{\alpha_\ell}|0\rangle\) denote the Floquet eigenstate of the system for which only the LIOMs \(\alpha_1\ldots\alpha_\ell\) take the value 1 (see Sec. II A for the definition of \(\hat f^\dagger_\alpha\)), and let \(E_{\alpha_1\ldots\alpha_\ell}\) denote the corresponding quasienergy.

Using the cutoff length d introduced below, we show that, for each finite ℓ ≤ k, where k denotes the system's degree of localization (which is infinite for MBL systems), the ℓ-particle Floquet eigenstates \(\{|\tilde\Psi_{\alpha_1\ldots\alpha_\ell}\rangle\}\) of \(\tilde U\) can be labeled such that, for each choice of LIOMs (identified by the LIOM indices \(\alpha_1\ldots\alpha_\ell\)), the eigenstate and its quasienergy agree with their zero-flux counterparts up to a gauge transformation and corrections that vanish in the thermodynamic limit [Eqs. (B17) and (B18), respectively]. Eq. (B17) thus shows that, in the thermodynamic limit, each eigenstate of \(\tilde U\) is identical to an eigenstate of U, up to a gauge transformation and a vanishingly small correction, while Eq. (B18) shows that their associated quasienergies are similarly identical up to a vanishing correction. This establishes the one-to-one correspondence of the quasienergy levels of the zero- and one-flux systems that we summarized below Eq. (B2).

Due to the possibility that the field \(B_0\) induces a resonance between two Floquet eigenstates of U, disorder realizations do exist where one (or more) of the eigenstates of \(\tilde U\) is a significantly hybridized combination of two eigenstates of U. In this case, Eq. (B17) will hold for most but not all Floquet eigenstates of the system. However, as we show here, the set of disorder realizations where such a resonance-induced breakdown of Eq. (B17) occurs has measure zero in the thermodynamic limit. In this way, Eqs. (B17) and (B18) hold for almost all disorder realizations in the thermodynamic limit. To establish Eqs. (B17) and (B18), we first consider the case ℓ = 1 (i.e., we establish the relationships for each single-particle Floquet eigenstate). Subsequently, in a stepwise fashion, we generalize this result to states with ℓ particles, for each ℓ = 2, …, k.

a. Single-particle eigenstates

Here we establish the relationships in Eqs. (B17) and (B18) for the single-particle case. We assume that k-particle localization is robust to perturbations, and thus \(\tilde U\) also describes a k-particle localized system (we assume k ≥ 1). Thus, in particular, each single-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\) has its full support within a finite disk-shaped region S of linear dimension d, with the probability of finding the particle a distance s outside S decaying as \(e^{-s/\xi_l}\). Due to its finite region of support, each single-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\) may only overlap significantly with Floquet eigenstates of U whose corresponding LIOM centers are located within a distance \(\sim \xi_l\) of S. To exploit this fact, we introduce a system-size-dependent length scale d ≫ \(\xi_l\), which acts as an effective length cutoff for the region of support of a LIOM. The length d must be much smaller than L, but can otherwise be taken to be arbitrarily large, as long as d/L vanishes in the thermodynamic limit. From the considerations above it follows that \(|\tilde\Psi\rangle\) only overlaps with the finite number of Floquet eigenstates, \(|\Psi_{\alpha_1}\rangle, \ldots, |\Psi_{\alpha_{N_1}}\rangle\), whose LIOM centers are located within a distance d of S, up to a correction exponentially small in \(d/\xi_l\):

\[
|\tilde\Psi\rangle = \sum_{n=1}^{N_1} \langle\Psi_{\alpha_n}|\tilde\Psi\rangle\, |\Psi_{\alpha_n}\rangle + O(e^{-d/\xi_l}). \tag{B19}
\]
For the purposes of the following, it is convenient to order the indices n according to the value of the overlap, such that \(|\langle\Psi_{\alpha_1}|\tilde\Psi\rangle| \ge |\langle\Psi_{\alpha_2}|\tilde\Psi\rangle| \ge \ldots \ge |\langle\Psi_{\alpha_{N_1}}|\tilde\Psi\rangle|\). Note that the sequence of LIOM indices \(\alpha_1 \ldots \alpha_{N_1}\) depends on the choice of \(|\tilde\Psi\rangle\); this dependence is taken to be implicit below, for the sake of brevity.

We now show that \(|\tilde\Psi\rangle\) only overlaps significantly with one of the eigenstates \(|\Psi_{\alpha_1}\rangle, \ldots, |\Psi_{\alpha_{N_1}}\rangle\), while the total weight from all other eigenstates gives a negligible contribution. To show this, note that \(|\Psi_{\alpha_n}\rangle\) and \(|\tilde\Psi\rangle\) are eigenstates of U and \(\tilde U\), respectively, and hence

\[
\big(e^{-i\tilde E T} - e^{-i E_{\alpha_n} T}\big)\, \langle\Psi_{\alpha_n}|\tilde\Psi\rangle = \langle\Psi_{\alpha_n}|(\tilde U - U)|\tilde\Psi\rangle, \tag{B20}
\]

where \(\tilde E\) is the quasienergy associated with \(|\tilde\Psi\rangle\). Since \(|\tilde\Psi\rangle\) is exponentially well localized within S, Eq. (B16) implies \(|\langle\Psi_{\alpha_n}|(\tilde U - U)|\tilde\Psi\rangle| \lesssim J T A_S/L^2\). Combining these two inequalities with Eq. (B20), we find

\[
\big|e^{-i\tilde E T} - e^{-i E_{\alpha_n} T}\big|\, \big|\langle\Psi_{\alpha_n}|\tilde\Psi\rangle\big| \lesssim J T A_S / L^2. \tag{B21}
\]

We now consider two implications of the above inequality. Firstly, Eq. (B19) implies \(|\langle\Psi_{\alpha_1}|\tilde\Psi\rangle|^2 \ge 1/N_1 - O(e^{-d/\xi_l})\) (cf. the labelling of the states \(\{|\Psi_{\alpha_n}\rangle\}\)). Thus,

\[
\big|e^{-i\tilde E T} - e^{-i E_{\alpha_1} T}\big| \lesssim \sqrt{N_1}\, J T A_S / L^2. \tag{B22}
\]

Secondly, we note that, for a random choice of \(|\tilde\Psi\rangle\), the typical spacing between the \(N_1\) quasienergy levels \(\{E_{\alpha_n}\}\) is of order \(\Delta E \sim W/N_1\), where W denotes the width of the single-particle quasienergy spectrum (when the quasienergy spectrum has no gaps, W = 2π/T). In this case, only one of the quasienergies \(\{E_{\alpha_n}\}\) (namely \(E_{\alpha_1}\)) is close enough to \(\tilde E\) for Eq. (B21) to allow a significant value of \(\langle\Psi_{\alpha_n}|\tilde\Psi\rangle\). Thus, \(|\tilde\Psi\rangle \approx |\Psi_{\alpha_1}\rangle\) for a typical choice of \(|\tilde\Psi\rangle\).

We now prove that \(|\tilde\Psi\rangle \approx |\Psi_{\alpha_1}\rangle\) for any choice of \(|\tilde\Psi\rangle\) in the system (except for a measure-zero set of disorder realizations in the thermodynamic limit). To establish this result, we establish a lower bound for \(|E_{\alpha_n} - E_{\alpha_1}|\), using the fact that the quasienergy levels of nearby states \(E_{\alpha_1}\) and \(E_{\alpha_n}\) repel each other, and that \(|\tilde E - E_{\alpha_1}|\) satisfies the bound of Eq. (B22). Specifically, note that the Floquet eigenstates \(|\Psi_{\alpha_1}\rangle\) and \(|\Psi_{\alpha_n}\rangle\) have their support within a distance d of each other. The quasienergies \(E_{\alpha_1}\) and \(E_{\alpha_n}\) are hence subject to local level repulsion when the quasienergy difference \(\delta E \equiv |E_{\alpha_n} - E_{\alpha_1}|\) is much smaller than the scale of the matrix elements between the states with respect to the kinetic part of the Hamiltonian (i.e., \(\delta E \ll J e^{-d/\xi_l}\)). In the limit where \(\delta E \ll J e^{-d/\xi_l}\), the probability distribution \(p(\delta E)\) for \(\delta E\) should thus resemble the Wigner-Dyson distribution for the circular unitary ensemble (CUE) [67], for which \(p(\delta E) \propto \delta E^2\) at small \(\delta E\).

Using the above result, we now compute the expected number of pairs of nearby single-particle eigenstates \(|\Psi_{\alpha_i}\rangle\) and \(|\Psi_{\alpha_j}\rangle\) in the entire system for which \(|E_{\alpha_i} - E_{\alpha_j}|\) is smaller than some given (small) value \(\delta E_0\). Here "nearby" refers to the eigenstates \(|\Psi_{\alpha_i}\rangle\) and \(|\Psi_{\alpha_j}\rangle\) having their centers located within a distance ~d of each other, such that they may potentially overlap with the same eigenstate of \(\tilde U\). Noting that there are \(O(L^2 N_1/2a^2)\) distinct pairs of nearby eigenstates (where a denotes the lattice constant), and using \(N_1 \sim (d/a)^2\), the expected number of such pairs scales as \(L^2 d^2/a^4\) times the probability \(\int_0^{\delta E_0} p(\delta E)\, d(\delta E) \propto \delta E_0^3\) that a given nearby pair lies closer than \(\delta E_0\). Thus, in the limit where \(\delta E_0 \ll J e^{-d/\xi_l}\), this expected number vanishes rapidly as \(\delta E_0 \to 0\). We recall that we may take d arbitrarily large, as long as d/L → 0 in the thermodynamic limit. In the following, it is convenient to let d scale with system size as \(d \sim \frac{1}{2}\xi_l \log(L/a)\). We conclude that, in the thermodynamic limit, there are zero pairs of Floquet eigenstates \(|\Psi_{\alpha_i}\rangle\) and \(|\Psi_{\alpha_j}\rangle\) with LIOM centers within a distance \(d \sim \frac{1}{2}\xi_l \log(L/a)\) of each other whose quasienergies differ by less than the bound in Eq. (B22).
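The cubic suppression of small CUE spacings used in the counting argument above is easy to check numerically. The sketch below (not part of the original analysis; the matrix dimension and sample count are arbitrary choices) samples Haar-random unitaries with scipy and verifies that the fraction of nearest-neighbor eigenphase spacings below a threshold s0 scales as s0^3:

```python
# Numerical check of CUE level repulsion: p(s) ~ s^2 for small s, so the
# cumulative fraction of spacings below s0 scales as s0^3.
import numpy as np
from scipy.stats import unitary_group

dim, n_samples = 40, 500
spacings = []
for _ in range(n_samples):
    U = unitary_group.rvs(dim)  # Haar-random unitary (CUE)
    phases = np.sort(np.angle(np.linalg.eigvals(U)))  # eigenphases in (-pi, pi]
    gaps = np.diff(np.append(phases, phases[0] + 2 * np.pi))  # include wraparound gap
    spacings.extend(gaps / gaps.mean())  # unfold to unit mean spacing

spacings = np.asarray(spacings)
# Wigner surmise for the CUE: p(s) = (32/pi^2) s^2 exp(-4 s^2/pi), so the
# fraction below s0 is ~ (32/(3 pi^2)) s0^3 for small s0.
for s0 in (0.1, 0.2, 0.4):
    print(s0, np.mean(spacings < s0), 32 / (3 * np.pi**2) * s0**3)
```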
b. Two-particle eigenstates

We next consider a two-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\), obtained by "exciting" two LIOMs \(\tilde n_1\) and \(\tilde n_2\) of the one-flux system. Nearby LIOMs — We first consider the case where the centers of \(\tilde n_1\) and \(\tilde n_2\) are located within a distance d of each other. In this case, \(|\tilde\Psi\rangle\) may only overlap significantly with two-particle eigenstates \(|\Psi_{\alpha\beta}\rangle\) of U whose LIOMs \(\hat n_\alpha\) and \(\hat n_\beta\) are located within a distance d of the centers of \(\tilde n_1\) and \(\tilde n_2\). As a result, there are only of order \(N_2 \sim (2d^2/a^2)^2\) choices of distinct LIOMs α, β for which \(|\Psi_{\alpha\beta}\rangle\) can significantly overlap with \(|\tilde\Psi\rangle\). Using the same arguments as for the single-particle case (Sec. B 4 a), one can show that, for all but a measure-zero set of disorder realizations in the thermodynamic limit, there exists a unique two-particle eigenstate \(|\Psi_{\alpha\beta}\rangle\) of U for each two-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\) such that (up to a gauge transformation)

\[
|\tilde\Psi\rangle = |\Psi_{\alpha\beta}\rangle + O(L^{-1/2}).
\]

Separated LIOMs — Next, we consider the case where the two excited LIOMs \(\tilde n_1\) and \(\tilde n_2\) are separated by a distance Δr larger than d. In this case, the LIOM structure of the Floquet operator \(\tilde U\) [Eq. (1) in the main text] implies that, up to a correction exponentially small in \(\Delta r/\xi_l\), \(|\tilde\Psi\rangle\) may be written as a direct product of two single-particle eigenstates \(|\tilde\Psi_\alpha\rangle\) and \(|\tilde\Psi_\beta\rangle\). Here α and β refer to the labeling of the single-particle eigenstates of \(\tilde U\) that was established in the previous subsection. Letting \(S_\alpha\) and \(S_\beta\) denote the two non-overlapping regions of linear dimension d where the states \(|\tilde\Psi_\alpha\rangle\) and \(|\tilde\Psi_\beta\rangle\) respectively have their support (up to a correction exponentially small in \(d/\xi_l\)), we have [68]:

\[
|\tilde\Psi\rangle = |\tilde\Psi_\alpha\rangle_{S_\alpha} \otimes |\tilde\Psi_\beta\rangle_{S_\beta} \otimes |0\rangle + O(e^{-d/\xi_l}), \tag{B35}
\]

where we used Δr > d. Here \(|\Psi\rangle_S\) denotes the restriction of the state \(|\Psi\rangle\) to the Fock space of the region S (defined from the projection of \(|\Psi\rangle\) onto the subspace with no particles outside region S). The state \(|0\rangle\) refers to the vacuum in the region complementary to \(S_\alpha\) and \(S_\beta\). Since the two particles in the state \(|\tilde\Psi\rangle\) are separated by a distance much larger than d, the regions \(S_\alpha\) and \(S_\beta\) do not overlap.

We recall that Eq. (B17) was already proven to hold for the single-particle case. Thus \(|\tilde\Psi_\alpha\rangle\) (the eigenstate in the presence of one flux quantum piercing the system) is approximately identical to a single-particle eigenstate \(|\Psi_\alpha\rangle\) of the zero-flux system's Floquet operator U (for all but a measure-zero set of disorder realizations). Specifically, up to a gauge transformation, \(|\tilde\Psi_\alpha\rangle = |\Psi_\alpha\rangle + O(L^{-2})\). The eigenstate \(|\Psi_\alpha\rangle\) moreover has its full support in the same region \(S_\alpha\) as \(|\tilde\Psi_\alpha\rangle\), up to a correction exponentially small in \(d/\xi_l\). Letting \(V_\alpha\) be the unitary operator that generates the transformation to the gauge in which Eq. (B17) holds for \(|\tilde\Psi_\alpha\rangle\), we have

\[
|\tilde\Psi_\alpha\rangle_{S_\alpha} = V_\alpha\, |\Psi_\alpha\rangle_{S_\alpha} + O(L^{-1/2}), \tag{B36}
\]

where we used that we may take \(d \sim \frac{1}{2}\xi_l \log(L/a)\), such that the correction \(O(e^{-d/\xi_l})\) scales with system size as \(L^{-1/2}\) in the thermodynamic limit. Using the relation (B36) for the states \(|\tilde\Psi_\alpha\rangle_{S_\alpha}\) and \(|\tilde\Psi_\beta\rangle_{S_\beta}\) in Eq. (B35), we hence obtain

\[
|\tilde\Psi\rangle = V_\alpha V_\beta\, |\Psi_\alpha\rangle_{S_\alpha} \otimes |\Psi_\beta\rangle_{S_\beta} \otimes |0\rangle + O(L^{-1/2}). \tag{B37}
\]

Due to the LIOM structure of the Floquet operator U [Eq. (1) in the main text], \(|\Psi_\alpha\rangle_{S_\alpha} \otimes |\Psi_\beta\rangle_{S_\beta} \otimes |0\rangle\) is identical to the Floquet eigenstate \(|\Psi_{\alpha\beta}\rangle\) of the zero-flux system, up to a correction of order \(e^{-d/\xi_l}\). Since the product of the two gauge transformations \(V_\alpha\) and \(V_\beta\) is itself a gauge transformation, we thus conclude that, up to a gauge transformation,

\[
|\tilde\Psi\rangle = |\Psi_{\alpha\beta}\rangle + O(L^{-1/2}).
\]

The two cases we considered above show that, in the thermodynamic limit, and for all but a measure-zero set of disorder realizations, each two-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\) is identical to a unique eigenstate of U, up to a gauge transformation and a correction of order \(O(L^{-1/2})\). We may thus label the two-particle eigenstates of \(\tilde U\) such that Eqs. (B17) and (B18) hold with ℓ = 2, and for each choice of the LIOM indices \(\alpha_1\) and \(\alpha_2\).

c. ℓ-particle eigenstates

We finally consider the general case of an ℓ-particle eigenstate \(|\tilde\Psi\rangle\) of \(\tilde U\), where ℓ is smaller than or equal to the system's degree of localization, k.
For this situation, we can apply the same structure of arguments as for the two-particle case: due to the LIOM structure of the one-flux Floquet operator \(\tilde U\), each ℓ-particle eigenstate is constructed by "exciting" LIOMs \(\tilde n_1 \ldots \tilde n_\ell\). We split our line of argument into two cases, depending on whether or not the LIOMs \(\tilde n_1 \ldots \tilde n_\ell\) can be divided into clusters separated from each other by distances greater than d.

In the case where the excited LIOMs can be divided into clusters in the way above, \(|\tilde\Psi\rangle\) can be written as a direct product of eigenstates of \(\tilde U\) with fewer than ℓ particles, up to a correction of order \(e^{-d/\xi_l}\). Following the same line of argument as for the analogous two-particle case, the relationships (B17) and (B18) can then be demonstrated to hold for this class of eigenstates, using the fact that Eqs. (B17) and (B18) hold for eigenstates with fewer than ℓ particles.

In the case where all ℓ LIOMs are located in the same cluster, we note that \(|\tilde\Psi\rangle\) only significantly overlaps with eigenstates \(\{|\Psi_{\alpha_1\ldots\alpha_\ell}\rangle\}\) for which the centers of all the LIOMs \(\hat n_{\alpha_1}\ldots\hat n_{\alpha_\ell}\) are located in the region S consisting of all sites within a distance d of any of the excited LIOMs \(\tilde n_1\ldots\tilde n_\ell\). There only exists a finite number \(N_\ell\) of eigenstates with this property; specifically, \(N_\ell\) counts the number of distinct configurations of ℓ LIOMs \(\hat n_{\alpha_1}\ldots\hat n_{\alpha_\ell}\) whose centers are located within S, and is of order \((\ell d^2/a^2)^\ell\). Crucially, \(N_\ell\) only depends on the number of particles, ℓ, and on d, and is independent of system size. Using the same arguments as for the single-particle case, we then find that, for all but a measure-zero set of disorder realizations in the thermodynamic limit, there exists a unique eigenstate \(|\Psi_{\alpha_1\ldots\alpha_\ell}\rangle\) of U such that (up to a gauge transformation)

\[
|\tilde\Psi\rangle = |\Psi_{\alpha_1\ldots\alpha_\ell}\rangle + O(L^{-1/2}),
\]

where, as we described in the beginning of this Appendix, \(O(L^{-p})\) denotes a term scaling with system size as \(L^{-p}\) in the thermodynamic limit (see Footnote [64]). In addition, when the LIOMs are located within a distance d of the same point, the quasienergy \(\tilde E\) of \(|\tilde\Psi\rangle\) agrees with \(E_{\alpha_1\ldots\alpha_\ell}\) up to a similarly vanishing correction. Thus, Eqs. (B17) and (B18) hold for the ℓ-particle case in the thermodynamic limit, for any ℓ = 1, …, k.

Relationship between magnetization density and quasienergy

Having established the auxiliary results in Secs. B 1–B 4, we are now ready to prove Eq. (B1), which is the first main goal of this Appendix. To recapitulate, we seek to show that, for each ℓ-particle Floquet eigenstate \(|\psi_n\rangle\) with quasienergy \(\varepsilon_n\), the associated quasienergy of the one-flux system, \(\tilde\varepsilon_n\) (see Sec. B 4 for details), satisfies

\[
\tilde\varepsilon_n = \varepsilon_n - B_0\, \langle\psi_n|\bar M|\psi_n\rangle + O(L^{-5/2}), \tag{B41}
\]

where \(\bar M\) denotes the time-averaged magnetization operator (see Sec. III B 1 of the main text), and, as in Sec. B 4 above, \(O(L^{-p})\) denotes a correction of order \(\lambda L^{-p}\) or less, where λ is some system-size-independent energy scale that does not play a role in our discussion.

In this step of the derivation it is useful to define a region of support, \(S_n\), for each Floquet eigenstate \(|\psi_n\rangle\). Specifically, for each Floquet eigenstate \(|\psi_n\rangle\), and for some length scale d ≪ L, we let \(S_n\) denote the smallest region of the lattice that ensures that the centers of all nonzero LIOMs in the state \(|\psi_n\rangle\), \(\alpha_1\ldots\alpha_\ell\), are located within a distance d of the boundary of \(S_n\). The region of support \(S_n\) may consist of one or several disconnected disk-shaped subregions of linear dimension d, and has area \(A_{S_n} \le \pi \ell d^2\). As in Sec. B 4, when taking the thermodynamic limit L → ∞ in the following, we let d increase logarithmically with system size as \(d \sim \frac{1}{2}\xi_l \log(L/a)\).
To establish Eq. (B41), for a given Floquet eigenstate \(|\psi_n\rangle\), we let \(\tilde U\) be the one-flux Floquet operator in a gauge where Eq. (B16) holds within \(S_n\), and let \(|\tilde\psi_n\rangle\) denote the eigenstate of \(\tilde U\) corresponding to \(|\psi_n\rangle\) through Eq. (B17). Noting that \(|\psi_n\rangle\) and \(|\tilde\psi_n\rangle\) are eigenstates of U and \(\tilde U\), respectively, we have
Neisseria Heparin Binding Antigen is targeted by the human alternative pathway C3-convertase

Neisserial Heparin Binding Antigen (NHBA) is a surface-exposed lipoprotein specific for Neisseria and constitutes one of the three main protein antigens of the Bexsero vaccine. Meningococcal and human proteases cleave NHBA protein upstream or downstream of a conserved Arg-rich region, respectively. The cleavage results in the release of the C-terminal portion of the protein. The C-terminal fragment originating from the processing by meningococcal proteases, referred to as the C2 fragment, exerts a toxic effect on endothelial cells, altering endothelial permeability. In this work, we report that the recombinant C2 fragment has no influence on the integrity of human airway epithelial cell monolayers, consistent with previous findings showing that Neisseria meningitidis traverses the epithelial barrier without disrupting the junctional structures. We show that epithelial cells constantly secrete proteases responsible for a rapid processing of the C2 fragment, generating a new fragment that does not contain the Arg-rich region, a putative docking domain reported to be essential for the C2-mediated toxic effect. Moreover, we found that the C3-convertase of the alternative complement pathway is one of the proteases responsible for this processing. Overall, our data provide new insights into the cleavage of NHBA protein during meningococcal infection. NHBA cleavage may occur at different stages of the infection, and it likely has a different role depending on the environment the bacterium is interacting with.

Introduction

Neisseria meningitidis, a gram-negative obligate human commensal typically residing in the nasopharyngeal mucosa, is a pathogenic member of the Neisseria family and a leading cause of fatal sepsis and bacterial meningitis worldwide. Based on the immunologic reactivity of the capsular polysaccharides, 12 distinct serogroups have been defined (A, B, C, E, H, I, K, L, W, X, Y and Z), six of which cause life-threatening disease (A, B, C, W, X and Y) [1]. In non-epidemic settings, approximately 10% of healthy individuals at any time carry N. meningitidis in the upper airways [2]. Acquisition of the bacteria from a healthy carrier or an infected person occurs through close direct contact with respiratory droplets or secretions. Colonization of the human nasopharyngeal mucosa by N. meningitidis is the first step in the establishment of both the carrier state and invasive meningococcal disease. Bacteria attach selectively to, and enter, non-ciliated columnar cells of the nasopharyngeal mucosa [3]. After adhesion, the interaction of N. meningitidis with epithelial cells induces a reorganization of the cell surface. Microvilli of epithelial cells elongate, surround the bacteria, and eventually engulf meningococci by invagination of the cell membrane and vacuole formation [3]. Penetration of meningococci through the epithelial layer occurs by a transcellular route without disrupting the epithelium integrity [4; 5], and this is not an unusual event, since bacteria are found in the sub-epithelial tissue of healthy individuals [6]. In asymptomatic carriers, bacteria that cross the epithelial barrier are eliminated; in susceptible individuals, bacteria survive and enter the bloodstream, resulting in a systemic infection. Frequently, meningococci translocate from the bloodstream across the blood-brain barrier, proliferate in the cerebrospinal fluid, and cause meningitis.
Neisseria Heparin Binding Antigen (NHBA), also known as GNA2132 (Genome-derived Neisseria Antigen 2132), is a surface-exposed lipoprotein specific for Neisseria, and it is one of the three main protein antigens of the Bexsero vaccine against N. meningitidis serogroup B. NHBA has been implicated in different steps of meningococcal pathogenesis, including bacterial adhesion to epithelial cells, biofilm formation, bacterial survival in the blood, and vascular leakage [7; 8; 9; 10]. NHBA protein can be structurally divided into an N-terminal and a C-terminal domain separated by a conserved Arginine (Arg)-rich region, which is responsible for the binding of the protein to heparin and heparan sulfate proteoglycans and is the target of several proteases [7; 10]. The proteolytic processing of NHBA protein results in the release of its C-terminal portion. Human proteases, including lactoferrin and kallikrein, cleave the protein downstream of the Arg-rich region, releasing a C-terminal fragment that does not contain the Arg-rich region [7; 11]. In a subset of N. meningitidis hypervirulent strains, the meningococcal NalP protease cleaves NHBA protein upstream of the Arg-rich region, generating a C-terminal fragment that contains the Arg-rich region; this fragment is referred to as the C2 fragment [7]. The recombinant C2 fragment has been shown to enter endothelial cells and to alter endothelial permeability in vitro. Reactive oxygen species production and phosphorylation/degradation of the adherens-junction protein VE-cadherin are involved in this latter process [9]. Since NHBA expression is upregulated at 32˚C [12], a temperature encountered during the initial colonization of the upper respiratory tract by N. meningitidis [13], we hypothesized that the C2 fragment could also alter epithelial permeability and facilitate the traversal of N. meningitidis through the epithelium. To this end, we verified the effect of the C2 fragment on polarized Calu-3 cells, a cell line resembling the morphological features of a differentiated human airway epithelium. In line with previous studies showing that N. meningitidis traverses the epithelial barrier without disrupting the junctional structures [3; 4; 5], we observed no influence of the C2 fragment on cellular integrity. Unexpectedly, we found that epithelial cells were able to process both the C2 fragment and the NHBA full-length protein. We identified the cleavage site of epithelial cell proteases within NHBA protein, and finally we verified that the C3-convertase of the alternative complement pathway is one of the proteases responsible for this cleavage.

Cell culture

Calu-3 epithelial cells, derived from lung adenocarcinoma (HTB-55; ATCC), were cultured in T75 flasks with DMEM:F12 supplemented with L-Glutamine (Gibco-Thermo Fisher Scientific), 10% (v/v) fetal bovine serum (Gibco-Thermo Fisher Scientific) and Penicillin-Streptomycin (Gibco-Thermo Fisher Scientific), at 37˚C in 5% CO2. Polarization of Calu-3 cells was performed according to the method developed by Sutherland et al. [5]. Briefly, cell monolayers were grown on 0.33 cm², 1-μm-pore-size BD Falcon cell culture inserts (BD Bioscience) containing polyethylene terephthalate membranes in 24-well plates. Calu-3 cells, between passages 4 and 8, were seeded at a density of 0.15 × 10⁶ cells per transwell onto the apical side of membranes that had been previously coated with a solution of collagen type I from rat tail (10 μg/cm²; Sigma).
Cell monolayers were maintained with 0.8 mL of culture medium in the apical chamber and 0.5 mL in the basolateral chamber. Cells were allowed to grow and differentiate for 5-6 days, with the media changed every second day. Cell polarity and tight junction barrier function were verified by measuring trans-epithelial electrical resistance (TEER) using an epithelial voltohmmeter (EVOM2; World Precision Instruments) attached to STX chopstick electrodes (World Precision Instruments). Cultures with TEER values of more than 1,400 Ω were retained for experimentation. On the day before each experiment performed with Calu-3 cells, polarized Calu-3 cells, or derivatives of these cells, the serum concentration in the culture medium was decreased to 1% (v/v). On the experimental day, cell monolayers were washed extensively with DMEM:F12 supplemented with L-Glutamine (Gibco-Thermo Fisher Scientific), hereafter referred to as DMEM/F12 medium, in order to eliminate any trace of serum prior to performing the experiment.

Primary normal human bronchial epithelial (NHBE) cells, isolated from a single healthy donor (Lonza CC-2540S), were differentiated according to the manufacturer's instructions. Briefly, cells were expanded in T75 flasks, using bronchial epithelial basal growth medium (BEBM; Lonza) supplemented with the BEGM (bronchial epithelial cell growth medium) BulletKit (Lonza), as recommended by the supplier, at 37˚C in 5% CO2 until ~80% confluence, and used between passages 1 and 3. Then, cells were dissociated using StemPro Accutase cell dissociation reagent (Life Technologies) and were seeded onto semipermeable membrane supports (12-mm diameter, 0.4-μm pore size; Costar) that had been previously coated with a solution of collagen type I from rat tail (Gibco) at a concentration of 0.03 mg/mL. Cells were seeded at a density of 0.1 × 10⁶ cells per well using bronchial air-liquid interface (B-ALI) medium (Lonza) supplemented with the B-ALI BulletKit (Lonza). When confluence was reached, the apical medium was removed, and an air-liquid interface was established to trigger differentiation. Cells were maintained at the ALI for at least 28 days prior to use in biological assays, with the basolateral medium changed every second day, to ensure a differentiated cell population with a mucociliary phenotype. The apical side was rinsed with phosphate-buffered saline (PBS) every week to remove excess mucus production. Cell polarity and tight junction barrier function were verified by measuring trans-epithelial electrical resistance (TEER) using an epithelial voltohmmeter (EVOM2; World Precision Instruments) attached to STX chopstick electrodes (World Precision Instruments). Cultures with TEER values of more than 1,200 Ω were retained for experimentation.
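For illustration, the TEER acceptance step described above can be written as a short script (a minimal sketch, not from the paper; subtracting a blank, cell-free insert and converting to unit-area values are common practice but are assumptions here, since the text reports raw thresholds):

```python
# TEER screening: keep only wells whose raw resistance exceeds the threshold,
# then optionally express TEER per unit area of the insert membrane.
MEMBRANE_AREA_CM2 = 0.33      # Calu-3 insert area stated in the text
CALU3_THRESHOLD_OHM = 1400.0  # retention threshold stated in the text

def teer_ohm_cm2(reading_ohm: float, blank_ohm: float, area_cm2: float) -> float:
    """Unit-area TEER: (sample - blank) resistance multiplied by membrane area."""
    return (reading_ohm - blank_ohm) * area_cm2

readings = [1650.0, 1480.0, 1210.0, 1890.0]  # hypothetical EVOM2 readings (ohm)
kept = [r for r in readings if r > CALU3_THRESHOLD_OHM]
print("retained wells (ohm):", kept)
print("unit-area TEER of first retained well:",
      teer_ohm_cm2(kept[0], blank_ohm=120.0, area_cm2=MEMBRANE_AREA_CM2), "ohm*cm^2")
```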
Cell supernatant preparation

Cell supernatants were recovered from Calu-3, polarized Calu-3, or differentiated NHBE cells. For Calu-3 and polarized Calu-3 cells, cell supernatants were recovered after incubation of the cells in DMEM/F12 medium for at least 8 h or overnight, at 37˚C and 5% CO2. Cell supernatants were then collected and stored at 4˚C or -20˚C until use. For differentiated NHBE cells, the PBS washes of the apical side routinely performed to remove excess mucus were collected and stored at 4˚C or -20˚C until use.

Bacterial strains, growth conditions and working seed preparation

N. meningitidis strain M2934 is a clinical isolate from the United States of America [14], and it belongs to cc32 (ST-32) [15]. The M2934 strain was cultured on GC agar plates (Difco) or in DMEM/F12 medium at 37˚C plus 5% CO2. Working seeds were prepared by inoculating overnight colonies grown on GC agar plates into 25 mL of DMEM/F12 medium contained in a 125-mL Corning Erlenmeyer cell culture flask (Sigma). An OD600nm of ~0.1 was used as the starting culture density. The liquid culture was grown at 37˚C plus 5% CO2, under aerobic conditions (180 rpm), until OD600nm ~1. After adding glycerol (final concentration of 15% (v/v)), the liquid culture was aliquoted into 1-mL cryovials and stored at -80˚C until use. For the cleavage assay on live bacteria, 4 working seeds (OD600nm ~0.6) were used to inoculate ~20 mL of DMEM/F12 medium contained in 125-mL Corning Erlenmeyer cell culture flasks (Sigma). An OD600nm of ~0.1 was used as the starting culture density. The liquid culture was grown at 37˚C plus 5% CO2, under aerobic conditions (180 rpm), until OD600nm ~0.5, and then used for experimentation. E. coli strain BL21 (DE3) (Invitrogen) was cultured at 37˚C on Luria Bertani (LB) agar plates, in LB broth, or using the EnPresso B growth kit (Biosilta); when required, ampicillin was added to the medium at a final concentration of 100 μg/mL. Liquid cultures prepared with LB broth were grown at 37˚C under aerobic conditions (180 rpm), while liquid cultures prepared using EnPresso B medium were grown according to the manufacturer's instructions. Details of the strains used in this work are reported in S1 Text.

SDS-PAGE and Western blot analysis

Proteins were separated by SDS-PAGE electrophoresis using 4-12% or 12% polyacrylamide NuPAGE Bis-Tris Precast Gels (Invitrogen). For SDS-PAGE analysis, gels were stained with SimplyBlue Safe Stain (Invitrogen). For Western blot (WB) analysis, proteins contained in the gels were transferred onto nitrocellulose membranes. Western blots were performed according to standard procedures. NHBA full-length protein and its C-terminal fragments were identified with polyclonal mouse antisera raised against the recombinant NHBA full-length protein (working dilution 1:1,000) or against the recombinant C2 fragment (working dilution 1:1,000), respectively. An anti-mouse antiserum conjugated to horseradish peroxidase (Dako) was used as the secondary antibody. Bands were visualized with Super Signal West Pico Chemiluminescent Substrate (Pierce) following the manufacturer's instructions. Densitometric analysis of protein bands was performed using ImageJ software.

NHBA cleavage assay using recombinant proteins

Both the recombinant C2 fragment and the NHBA full-length protein were used as substrates for the assay. Cell monolayers, cell supernatants, fractions of cell supernatants, plasma-purified kallikrein (Sigma), human sera (Complement Technology), and purified complement components (Calbiochem) were used as protease sources for the assay. 0.5 μM or 5 μM of recombinant protein was incubated at 37˚C with a protease source or with DMEM/F12 medium, as negative control, for various time intervals (45 min, 1 h, 2 h, 4 h, 24 h). For purified complement components, C3b (5 μg), factor B (fB, 10 μg), and properdin (P, 1 μg) were incubated for 35 min at room temperature in 20 mM Hepes (pH 7.5), 75 mM NaCl, and 5 mM MgCl2 at a final molar ratio C3b:fB:P of 1:4:0.7. Subsequently, 100 ng of factor D and 1 μg of NHBA protein were added, and the mixture was further incubated for 1 h at 37˚C. At each time point, samples were collected and the cleavage of the substrates was evaluated by SDS-PAGE or WB analysis.
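As a worked check of the stated 1:4:0.7 molar ratio, the masses above can be converted to moles using approximate literature molecular weights (the molecular weights below are assumptions, with properdin counted per monomer); a minimal sketch:

```python
# Convert the protocol masses (ug) to relative molar amounts.
MW_KDA = {"C3b": 176.0, "fB": 93.0, "P": 53.0}   # approximate molecular weights
MASS_UG = {"C3b": 5.0, "fB": 10.0, "P": 1.0}     # amounts stated in the protocol

nmol = {k: MASS_UG[k] / MW_KDA[k] for k in MW_KDA}        # ug / kDa = nmol
ratio = {k: round(v / nmol["C3b"], 2) for k, v in nmol.items()}
print(ratio)  # -> {'C3b': 1.0, 'fB': 3.78, 'P': 0.66}, i.e. roughly 1:4:0.7
```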
When required, protease sources were pre-treated with protease inhibitors 30 min before adding the substrate. The protease inhibitors used in this work were the following: EDTA (Sigma), Leupeptin (Sigma), Pepstatin A (Sigma), GI254023X (Sigma), and E-64 (Sigma).

NHBA cleavage assay using live bacteria

5 mL of cell supernatant or DMEM/F12 medium, as negative control, were incubated for 2 h, at 37˚C and 5% CO2, with N. meningitidis strain M2934. For each condition tested, 2 mL of liquid culture of the M2934 strain grown to OD600nm ~0.5 was used to prepare the inoculum. After 2 h of incubation, samples were centrifuged at 3,500 × g for 10 min in order to collect both bacterial pellets and supernatants. Bacteria were immediately processed for FACS analysis. Supernatants were filtered using a 0.22 μm filter, concentrated using 10,000 MW Amicon Ultra Centrifugal Filters (Millipore), and then processed for WB analysis.

FACS analysis

Bacteria were washed and then incubated for 1 h, at room temperature (RT), with polyclonal mouse antisera raised against the recombinant C2 fragment (working dilution 1:500). After several washing steps, samples were incubated for 30 min, at RT, with an Alexa Fluor 488-conjugated goat anti-mouse IgG secondary antibody (working dilution 1:1,000; Life Technologies). All washing steps and antibody dilutions were performed using 1% (v/v) bovine serum albumin (Sigma) in PBS. Labeled bacteria were washed and fixed for 1 h at RT using 2% (v/v) formaldehyde (Carlo Erba Reagents) in PBS. Samples were analyzed with the BD FACSCanto II system (BD Bioscience).

Proteomic analysis of polarized Calu-3 supernatant

For proteomic analysis, polarized Calu-3 supernatant was fractionated by anion exchange chromatography. The sample was dialyzed at 4˚C overnight against 50 mM Tris-HCl, pH 8, using SnakeSkin Dialysis Tubing, 3,500 MWCO, 22 mm (Thermo Scientific). The dialyzed sample was loaded onto a 1 mL HiTrap Q HP column (GE Healthcare). Fractions were eluted by increasing the ionic strength; a one-step NaCl gradient was performed using 50 mM Tris-HCl, 1 M NaCl, pH 8. The AKTA FPLC system was used to control the NaCl gradient (from 0 M to 1 M NaCl). Protein fractions positive for proteolytic activity on NHBA protein or the C2 fragment were pooled and precipitated in 10% (v/v) trichloroacetic acid and 0.04% (w/v) sodium deoxycholate. Protein pellets were solubilized in 50 μl of 0.1% (w/v) RapiGest (Waters, MA, USA), 1 mM DTT and 50 mM ammonium bicarbonate, and boiled at 100˚C for 10 min. After cooling down, 1 μg of LysC/trypsin mix (Promega) was added and the digestion was performed overnight. Digestions were stopped with 0.1% final formic acid, desalted using OASIS HLB cartridges (Waters) as described by the manufacturer, dried in a Centrivap Concentrator (Labconco) and resuspended in 100 μl of 3% (v/v) acetonitrile (ACN) and 0.1% (v/v) formic acid. An Acquity HPLC instrument (Waters) was coupled on-line to a Q Exactive Plus (Thermo Fisher Scientific) with an electrospray ion source (Thermo Fisher Scientific). The peptide mixture (10 μl) was loaded onto a C18 reversed-phase column (Acquity UPLC Peptide CSH C18, 130 Å, 1.7 μm, 1 × 150 mm) and separated with a linear gradient of 28-85% buffer B (0.1% (v/v) formic acid in ACN) at a flow rate of 50 μL/min and 50˚C. MS data were acquired in positive mode using a data-dependent acquisition (DDA) method, dynamically choosing the five most abundant precursor ions from the survey scan (300-1600 m/z), at 70,000 resolution, for HCD fragmentation.
The Automatic Gain Control (AGC) target was set at 3E+6. For MS/MS acquisition, the isolation of precursors was performed with a 3 m/z window, and MS/MS scans were acquired at a resolution of 17,500 at 200 m/z with a normalized collision energy of 26 eV. The mass spectrometric raw data were analyzed with the PEAKS software ver. 8 (Bioinformatics Solutions Inc., ON, Canada) for de novo sequencing, database matching and identification. Peptide scoring for identification was based on a database search with an initial allowed mass deviation of the precursor ion of up to 15 ppm. The allowed fragment mass deviation was 0.05 Da. Protein identification from MS/MS spectra was performed against the NCBInr Homo sapiens (human) protein database (112,970,924 protein entries; 41,399,473,309 residues) combined with common contaminants (human keratins and autoproteolytic fragments of trypsin), with an FDR set at 0.1%. Enzyme specificity was set as C-terminal to Arg and Lys, with a maximum of four missed cleavages. N-terminal pyroGlu, Met oxidation and Gln/Asn deamidation were set as variable modifications.

Intact mass measurement

In order to determine the cleavage site of epithelial cell proteases, the His-tagged C-terminal portion of NHBA and the recombinant C2 fragment were recovered by Ni2+ IMAC affinity enrichment after incubation with polarized Calu-3 cells, and their intact masses were measured. The acidified protein solutions were loaded onto a Protein MicroTrap cartridge (from 60 to 100 pmol), desalted for 2 min with 0.1% (v/v) formic acid at a flow rate of 200 ml/min, and eluted directly into the mass spectrometer using a step gradient of acetonitrile (55% (v/v) acetonitrile, 0.1% (v/v) formic acid). Spectra were acquired in positive mode on a Synapt G2 HDMS mass spectrometer (Waters) equipped with a Z-spray ESI source. The quadrupole profile was optimized to ensure the best transmission of all ions generated during the ionization process. Mass spectra were smoothed, centroided and deconvoluted using MassLynx ver. 4.1 (Waters).

Proteases secreted by epithelial cells process the NHBA-derived C2 fragment

The recombinant NHBA-derived C2 fragment exerts a toxic effect on endothelial cells, altering endothelial permeability [9]. The recent finding that NHBA protein is more highly expressed and cleaved at temperatures lower than 37˚C [12], which are encountered during the initial colonization of the upper respiratory tract by N. meningitidis, led us to hypothesize that the C2 fragment could also alter epithelial permeability and facilitate the traversal of N. meningitidis through the epithelium. To test our hypothesis, we evaluated C2 fragment activity on polarized Calu-3 cells cultured on collagen-coated permeable membranes in cell culture inserts. The recombinant C2 fragment was added to the apical chamber, and cellular integrity was assessed by measuring the permeability of the epithelial cell monolayers to different fluorescent probes (BSA-FITC and Dextran-Texas Red) previously added to the apical chamber. No influence of the C2 fragment on permeability was observed when comparing the basal chamber fluorescence of treated monolayers with that of untreated monolayers (S1 Fig). Consistent with this observation, the organization of tight junction structures (TJs) was not altered by C2 fragment treatment, as revealed by the correct distribution of ZO-1 protein, a specific marker of TJ integrity (S1 Fig).
Surprisingly, when we checked the stability of the C2 fragment during the permeability assays by performing Western blot analysis on treated cell supernatants, a second, shorter band was observed (Fig 1A), indicating that epithelial cells were able to process the protein during the assays. This cleavage increased in a time-dependent manner. Of note, supernatants of epithelial cells alone were able to process the C2 fragment when incubated with the protein (Fig 1B), suggesting that the cellular proteases responsible for the cleavage were secreted by the epithelial cells.

Epithelial cell proteases eliminate the Arg-rich region of the C2 fragment

To validate our data in a more physiological system, we assessed the cleavage of both the C2 fragment and the NHBA protein using differentiated primary Normal Human Bronchial Epithelial (NHBE) cells. This primary cell line, grown on collagen-coated permeable membranes in cell culture inserts, develops a pseudostratified epithelium composed of basal, ciliated and non-ciliated goblet cells [16]. In addition, it allowed us to exclude the possibility that the cleavage we detected was the result of contamination by serum proteases, since NHBE cells were cultivated and differentiated in serum-free medium. Similarly to what we observed with polarized Calu-3 cells, both the C2 fragment and the NHBA protein were cleaved when incubated with differentiated NHBE cells (Fig 2) or with NHBE apical washes (S2 Fig). For both proteins, the cleavage increased over time and resulted in the generation of a C-terminal fragment that is devoid of the Arg-rich region epitope (Fig 2B and 2D), thus eliminating the domain essential for the C2-mediated toxic effect.

NHBA cleavage by epithelial cell proteases occurs on live bacteria

To verify our in vitro observations with recombinant proteins also on the native protein expressed on live bacteria, we used the natural strain M2934 expressing NHBA peptide 5, which is not cleaved by meningococcal proteases (data not shown). This strain thus allowed us to focus exclusively on NHBA cleavage by epithelial cell proteases. Bacteria were incubated either with Calu-3 cell supernatant or with cell medium, as negative control. After 2 h, samples were stained with an antibody directed against the C-terminus of NHBA, conjugated to FITC, and the percentage of bacterial cells exposing the C-terminal domain of NHBA on their surface was measured by flow cytometry analysis. As shown in Fig 3A, we observed a negative shift in fluorescence intensity when we compared bacteria incubated with epithelial cell supernatant to bacteria incubated with medium, indicating that treatment with cell supernatant led to a decrease of the C-terminal domain of surface-exposed NHBA proteins in the M2934 strain. This reduction was statistically significant (Fig 3B). The evidence that epithelial cell supernatant cleaved NHBA protein on live bacteria was further demonstrated by Western blot analysis. Supernatants derived from bacteria incubated for 2 hours with epithelial cell supernatant or medium were collected, concentrated and tested for the presence of the NHBA C-terminal fragment. As shown in Fig 3C, the NHBA C-terminal fragment was only detected in the concentrated supernatant of bacteria treated with epithelial cell supernatant, corroborating that the removal of the NHBA C-terminal domain from the bacterial surface by epithelial cell proteases resulted in the accumulation of the C-terminal fragment in the supernatant.
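The flow-cytometry readout described above amounts to comparing the fluorescence distributions of treated and control populations. A minimal sketch of this quantification (not from the paper; the arrays stand in for per-event intensities exported from the cytometer, and the gate placement is an assumption):

```python
# Quantify the negative fluorescence shift after supernatant treatment.
import numpy as np

rng = np.random.default_rng(0)
medium = rng.lognormal(mean=7.0, sigma=0.5, size=10_000)   # control-treated events
treated = rng.lognormal(mean=6.3, sigma=0.5, size=10_000)  # supernatant-treated events

print("median fluorescence ratio (treated/control):",
      round(float(np.median(treated) / np.median(medium)), 2))

gate = np.percentile(medium, 5)  # positivity gate derived from the control
print("% events above gate after treatment:",
      round(100 * float(np.mean(treated > gate)), 1))
```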
The cleavage site of epithelial cell proteases is located immediately downstream of the Arg-rich region

In a first effort to define the site within the C2 fragment or the NHBA protein at which epithelial cell proteases cleave, we examined whether the presence of the Arg-rich region was necessary for the cleavage. To this end, we used a recombinant NHBA mutant protein [7], wherein all arginines of the Arg-rich region were substituted with glycines (mRR). After incubating the protein with polarized Calu-3 cells, the cleavage was assessed at different time points by Western blot analysis. At all time points tested, the mRR mutant protein was not cleaved by polarized Calu-3 cell proteases (S3 Fig), indicating that the presence of the Arg-rich region was necessary for the processing of NHBA by epithelial cell proteases. This result also suggested that the cleavage by epithelial cell proteases occurred somewhere in the arginine-rich region. To further define the cleavage site of epithelial cell proteases, the intact mass of the C-terminal fragments generated by either the cleavage of the C2 fragment or the cleavage of the NHBA protein was measured by mass spectrometry. In both cases, a mass of 20379 Da was detected, corresponding to a C-terminal fragment starting at Ser14 or Ser281, respectively (Fig 4). Thus, we concluded that the cleavage site of epithelial cell proteases is located immediately downstream of the Arg-rich region, and corresponds to the cleavage site of human lactoferrin [7].
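Conceptually, mapping the measured intact mass (20379 Da) onto a cleavage position amounts to sliding a putative N-terminus along the sequence and comparing fragment masses. The sketch below illustrates this (the sequence is a hypothetical placeholder, not the real NHBA sequence; the average residue masses are standard approximate values; His-tag contributions are ignored):

```python
# Find candidate N-terminal start positions whose C-terminal fragment has an
# average mass matching the observed intact mass within a tolerance.
AVG_RESIDUE_MASS = {  # average residue masses (Da)
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # one H2O per intact chain

def candidate_start_positions(sequence: str, observed_mass: float, tol: float = 2.0):
    """Return 1-based positions whose C-terminal fragment matches observed_mass."""
    hits = []
    for start in range(len(sequence)):
        frag_mass = sum(AVG_RESIDUE_MASS[aa] for aa in sequence[start:]) + WATER
        if abs(frag_mass - observed_mass) <= tol:
            hits.append(start + 1)
    return hits

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical placeholder sequence
print(candidate_start_positions(seq, 20379.0))  # -> [] for this short placeholder
```

With the true NHBA sequences, the matches reported in the text start at Ser281 (full-length protein) and Ser14 (C2 fragment).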
EDTA abolishes NHBA cleavage by epithelial cell proteases

To identify the epithelial cell protease responsible for NHBA cleavage, we performed ion exchange chromatography of polarized Calu-3 cell supernatant in order to enrich a fraction of the cell supernatant in the protease of interest. Fractions were eluted by increasing the ionic strength, and two fractions (17 and 18) were selected for further analysis, since they showed the greatest cleavage efficiency and likely contained more protease molecules of interest. These fractions were pooled, subjected to trypsin digestion and analyzed by mass spectrometry. From the list of proteins identified by mass spectrometry (S1 Table), we selected proteins annotated as proteases and secreted by cells. Six protease candidates were identified, and they are listed in Table 1. Specific protease inhibitors were used to identify the protease responsible for the cleavage of the C2 fragment and the NHBA full-length protein. The following specific inhibitors were chosen and tested at several concentrations: GI254023X [17], Pepstatin A and E-64 (S5 Fig). With this approach, we ruled out the possibility that ADAM9, Cathepsin D and Cathepsin L1 could be the proteases responsible for the cleavage, since the cleavage still occurred. EDTA, a chelating agent of divalent and trivalent positive ions (i.e., Ca2+, Mg2+, Zn2+, Fe3+), was the only inhibitor able to block the activity of the epithelial cell protease of interest (Fig 5). A concentration range of 1-10 mM of EDTA was tested, and at 5 mM the cleavage by the epithelial cell supernatant was completely inhibited. From the list of candidates, the protease whose activity depends on positive ions, and which could thus be affected by EDTA treatment, was human complement factor B. Complement factor B, in the presence of Mg2+, interacts with the hydrolyzed form of complement C3 (C3(H2O)) or with C3b. This complex is then recognized by complement factor D or kallikrein, which cleaves and activates factor B, thus generating the C3-convertase of the alternative complement pathway [18; 19]. Human complement component C3 was also identified by mass spectrometry in the selected fractions of polarized Calu-3 cell supernatant (S1 Table). The expression of both complement component C3 and factor B by epithelial cells was also verified at the mRNA level (S6 Fig).

The alternative pathway C3-convertase processes NHBA protein

To further confirm that the C3-convertase of the alternative complement pathway was a protease responsible for the processing of the C2 fragment and the NHBA full-length protein, we decided to use Normal Human Serum (NHS), since in NHS complement components are highly concentrated and it is where they play their main physiological role. The cleavage was tested under three different conditions in which the C3-convertase of the alternative complement pathway could not be formed: (i) NHS was treated with EDTA in order to sequester Mg2+ and prevent the interaction between factor B and C3(H2O); (ii) NHS was depleted of factor B; (iii) NHS was depleted of factor D in order to prevent the activation of the C3(H2O)B complex. In all three conditions tested, in which the formation of the C3-convertase was inhibited, we observed a reduction of the cleavage of both substrate proteins and the generation of two distinct smaller C-terminal fragments (Fig 6), compatible with the kallikrein cleavage pattern (S7 Fig). This reduction was statistically significant (Fig 6C). Moreover, the processing of NHBA protein was observed after incubation with purified complement components, previously mixed to generate in vitro the C3-convertase of the alternative complement pathway (Fig 6D). The sole presence of factor B seemed to be sufficient for the cleavage, even though the reaction efficiency was low (Fig 6D, lane 7). Taken together, these results indicate that the C3-convertase of the alternative complement pathway is another human protease responsible for the cleavage of the C2 fragment and the NHBA protein.

[Fig 6. Western blot analysis of NHBA cleavage in NHS. Recombinant protein incubated with DMEM/F12 medium served as negative control. Polyclonal mouse sera against the C2 fragment (A) or the NHBA full-length protein (B) were used for blotting the membranes. C) Quantification of NHBA cleavage by densitometric analysis of Western blot bands: relative levels of NHBA cleavage were estimated by quantifying the ratio between the N-terminal fragment and the total amount of NHBA. Data show the means of three independent experiments; error bars denote standard deviation. Statistical analysis was performed using an unpaired, two-tailed t-test (p<0.01; p<0.04). D) Western blot analysis of NHBA full-length protein incubated for 1 h with purified complement components C3b, factor B (fB) and factor D (fD); properdin was included in the assay to stabilize the C3bBb complex [20]. https://doi.org/10.1371/journal.pone.0194662.g006]
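The Fig 6C quantification described in the caption reduces to simple ratios of band intensities. A minimal sketch of that bookkeeping (all densitometry values below are hypothetical; in the study the intensities were obtained with ImageJ):

```python
# Relative cleavage = N-terminal fragment intensity / total NHBA signal.
import numpy as np

def relative_cleavage(n_term: float, full_length: float) -> float:
    total = n_term + full_length
    return n_term / total if total > 0 else float("nan")

# Hypothetical triplicates: untreated NHS vs. EDTA-treated NHS.
nhs = [relative_cleavage(n, f) for n, f in [(820, 410), (790, 450), (860, 380)]]
nhs_edta = [relative_cleavage(n, f) for n, f in [(150, 980), (180, 940), (120, 1010)]]

print("NHS:      mean %.2f, sd %.2f" % (np.mean(nhs), np.std(nhs, ddof=1)))
print("NHS+EDTA: mean %.2f, sd %.2f" % (np.mean(nhs_edta), np.std(nhs_edta, ddof=1)))
```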
Discussion

In the present study, using an in vitro model of the human airway epithelium, we demonstrated that the C2 fragment did not alter the integrity of epithelial monolayers, in agreement with previous findings showing that N. meningitidis traverses the epithelial barrier without disrupting the junctional structures [3; 4; 5]. Unexpectedly, we discovered that epithelial cells were able to process the C2 fragment, converting it into a shorter fragment. Proteolysis of the C2 fragment by epithelial cells eliminated the Arg-rich region, the putative docking domain responsible for the interaction of the C2 fragment with host cells [9], likely preventing its biological activity. Therefore, this cleavage could represent a host defense mechanism against the C2 fragment-induced toxic effect. Based on these findings, we propose a model, depicted in Fig 7, to explain the different outcomes of the interaction between the C2 fragment and host cells depending on the cell type. In a subset of hypervirulent strains, NHBA protein is cleaved by meningococcal proteases and the C2 fragment, a potential virulence factor, is released from the bacterial surface. Acting as a first line of defense against meningococcal invasion, the airway epithelium constantly produces proteases that inactivate the C2 fragment. In contrast, under the in vitro conditions tested, endothelial cells do not express proteases that are promptly able to process the C2 fragment, allowing the C2 fragment to exert its toxic effect, which results in the perturbation of endothelial integrity.

Investigating the cleavage by epithelial cells further, using different models of study, we found that epithelial cells from the upper respiratory tract constantly secrete proteases responsible for a rapid cleavage of both the C2 fragment and the NHBA full-length protein. We also demonstrated that processing of NHBA protein occurs on live bacteria. Removal of the NHBA C-terminal domain from the bacterial surface results in the generation of an N-terminal fragment with the Arg-rich region exposed as a new C-terminal end, which might remain exposed on the bacterial surface. Finally, we demonstrated that, in addition to kallikrein and lactoferrin [7; 11], the C3-convertase of the alternative complement pathway is a human protease able to process both the C2 fragment and the NHBA full-length protein. Formation of the alternative pathway C3-convertase on the surface of N. meningitidis certainly occurs in human blood, and it might also occur during meningococcal colonization of the nasopharynx. In agreement with previous studies on the expression profile of human epithelial cells of the upper respiratory tract [21; 22; 23; 24], we indeed found that Calu-3 epithelial cells expressed and secreted complement component C3 and factor B. As a result of the tick-over mechanism of the C3 molecule [19], C3(H2O) can interact with factor B in the presence of Mg2+ and form the C3-convertase of the alternative complement pathway. This complex needs to be activated by the cleavage of factor B by factor D or kallikrein [18; 25]. In this study, neither factor D nor kallikrein was detected by mass spectrometry analysis in the fractions of epithelial cell supernatant responsible for NHBA cleavage. However, it is well known that tissue kallikrein 1 is expressed by tracheobronchial submucosal glands in human airways [26] and that its production increases under inflammatory conditions, such as asthma and viral infection [27; 28; 29]. Thus, the C3-convertase of the alternative pathway could be activated on the mucosal surface at least under these conditions. During N. meningitidis infection, concomitant inflammation of the airway epithelium might lead to the formation of the alternative pathway C3-convertase on the bacterial surface. Once activated, it might cleave surface-exposed NHBA protein. Removal of the NHBA C-terminal domain from the bacterial surface by the C3-convertase of the alternative complement pathway might affect complement activation by acting on two different steps of the complement cascade: (i) the antibody-mediated activation of the classical pathway and (ii) the amplification loop of the alternative pathway.
First of all, if the C-terminal domain of NHBA protein is removed from the bacterial surface, antibodies raised against the C-terminal domain will not be able to initiate the complement cascade. This mechanism could be extended to all proteases that process surface-exposed NHBA protein, both meningococcal and host proteases. In spite of this, antibodies directed against the N-terminal domain will remain functional even after the processing of NHBA protein, since the N-terminal fragment remains anchored to the bacterial surface. Using rabbit complement as a source of complement, it has been shown that NHBA cleavage, by either the meningococcal NalP protease or human lactoferrin, does not significantly affect the NHBA-mediated bactericidal activity, since bactericidal titers varied within one dilution [7]. However, further confirmation using human complement as a source of complement needs to be carried out, in order to exclude the possibility that removal of the NHBA C-terminal domain is a human species-specific mechanism to avoid complement killing, as has been discovered for the fHbp and NspA proteins [30; 31]. Secondly, involvement of the alternative pathway C3-convertase in the cleavage of NHBA protein from the bacterial surface might interfere with the amplification loop of the alternative pathway. The amplification loop is the balance between two separate competing cycles, C3b-C3 convertase formation and C3b breakdown, and it contributes to the overall complement response by pushing the complement cascade towards the formation of the C5-convertase, which finally leads to the lysis of bacteria through the formation of the MAC complex on the bacterial surface [19]. Since the alternative complement C3-convertase has a very short half-life (90 seconds), if it is occupied with the processing of NHBA protein on the bacterial surface instead of activating more C3 to C3b, the balance will likely shift in favor of the C3b breakdown cycle, dampening the complement cascade. This might represent an additional mechanism used by N. meningitidis to evade the complement system and survive in human blood. Overall, these speculations raise the question of whether the direct interaction between NHBA protein and the alternative pathway C3-convertase may limit the functionality of the Serum Bactericidal Antibody (SBA) assay, which is used for measuring the concentration of functional antibodies induced by the NHBA antigen, leading to an underestimation of the real functionality of these antibodies. Since antibodies against NHBA have been shown to inhibit adhesion of N. meningitidis to epithelial cells [10], inhibition of meningococcal adhesion to epithelial cells by anti-NHBA sera can be taken into consideration as an additional assay for measuring their functionality. A recent report showing that vaccination with Bexsero reduced meningococcal carriage rates of diverse circulating strains during the 12 months after vaccination [32] highlights the fact that the Bexsero vaccine likely confers protection against meningococcal disease by acting at different stages of the infection, including colonization. Antibodies induced by the NHBA antigen may contribute to protection mainly at this initial stage of the infection by inhibiting colonization by NHBA-expressing bacteria [10]. Therefore, it will be important to evaluate the contribution of the NHBA antigen to vaccine efficacy at the level of meningococcal colonization, and not only for its ability to induce bactericidal antibodies.

Supporting information

S1 Fig. C2 Fragment does not impair the integrity of the epithelium.
A) and B) Recombinant C2 fragment does not alter the permeability of the epithelium. In the permeability assay, polarized Calu-3 cells were cultured on collagen-coated permeable membranes in cell culture inserts. Recombinant C2 fragment was added to the apical chamber, and cellular integrity was assessed by measuring the passage of a fluorescent probe, previously added to the apical chamber, into the lower chamber at various time intervals. Two probes with different molecular weights were used: BSA-FITC (A) and 10,000 MW Dextran-Texas Red (B). TcdA, a Clostridium difficile toxin, was included in the assay as a positive control. The permeability of cell monolayers for a probe was quantified by calculating the Δ mean fluorescence intensity normalized to pre-treatment values. C) Recombinant C2 fragment does not alter the organization of tight junction structures (TJs). Immunofluorescence analysis of the distribution of ZO-1 protein, a specific marker for TJs (red staining), was performed in C2 fragment-treated (upper panel) or untreated (lower panel) polarized Calu-3 cells. Cell nuclei were detected with DAPI (blue staining). No significant changes in ZO-1 distribution were observed, indicating that TJ integrity was preserved.
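For readers who wish to retrace the arithmetic, the two quantifications described above (the densitometric cleavage ratio of Fig 6C and the normalized Δ mean fluorescence intensity of the permeability assay) reduce to simple calculations. The following is a minimal Python sketch of those calculations only; all intensity values are hypothetical placeholders rather than data from the study, and SciPy's ttest_ind stands in for the unpaired, two-tailed t-test named in the Fig 6 legend. It is an illustration of the method as described, not the authors' analysis code.

```python
# Minimal sketch of the quantifications described above.
# All numeric values are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

# --- Densitometric quantification (as described for Fig 6C) ---
# Relative NHBA cleavage = N-terminal fragment band / total NHBA signal,
# computed per independent experiment (n = 3 in the paper).
nterm = np.array([520.0, 480.0, 610.0])    # hypothetical band intensities
total = np.array([900.0, 850.0, 1000.0])   # N-terminal + full-length signal
cleavage_nhs = nterm / total               # untreated NHS condition

# A comparison condition, e.g. factor B-depleted NHS (hypothetical values).
cleavage_depleted = np.array([0.21, 0.18, 0.25])

print("mean +/- SD (NHS):", cleavage_nhs.mean(), cleavage_nhs.std(ddof=1))

# Unpaired, two-tailed t-test, as reported in the legend.
t_stat, p_value = stats.ttest_ind(cleavage_nhs, cleavage_depleted)
print("two-tailed p =", p_value)

# --- Permeability assay (as described for S1 Fig A/B) ---
# Delta mean fluorescence intensity, normalized to the pre-treatment value.
pre_mfi, post_mfi = 120.0, 135.0           # hypothetical MFI readings
delta_norm = (post_mfi - pre_mfi) / pre_mfi
print("normalized delta MFI:", delta_norm)
```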
8,331.4
2018-03-26T00:00:00.000
[ "Biology" ]
The Common Orientation of Community Psychology and Wonhyo's Thought: 'One Mind', 'Harmonizing Disputes' and 'Non-hindrance' in Focus

This study aims to relate the emerging field of community psychology to the philosophical thought of Wonhyo, a prominent figure in Korean Buddhism, from the aspect of their common orientation, in order to explore the development of both Buddhist philosophy and psychological research. The integration of modern psychology and Buddhist theory has only recently begun. In community psychology, there is a continuous need for the complementation of theory and case studies, and within Buddhism, there is a need to expand the advantages of Buddhist teachings both academically and popularly. Furthermore, this research is believed to contribute significantly to the theory and practice of community problem-solving, which modern society demands. The characteristics of community psychology that differ from previous psychological research are twofold. First, it conducts a balanced examination of individuals and structures, moving away from the individual-centric focus of traditional psychology. Second, it emphasizes practice beyond theory, diverging from the theory-heavy focus of prior studies. Wonhyo's philosophy is particularly well suited to these characteristics. In the discussion, the theoretical contributions of Buddhism to community psychology are examined, based on Wonhyo's philosophy, with a focus on the two main features mentioned above. This includes discussions on Buddhist introspection and the pursuit of enlightenment, grounded in a Mahāyāna perspective of the interdependent nature of the One Dharma world and the Bodhisattva path. The study further explores Wonhyo's philosophy and practical examples pertinent to community psychology. Specifically, this examination focuses on the community's psychological characteristics and practical examples demonstrated in Wonhyo's concepts of 'One Mind', 'Harmonizing Disputes (Hwajaeng)', and 'Non-hindrance', categorizing them into individual and community aspects. Through this research, it is confirmed that the personal cultivation and community contributions of Buddhism are vividly present in Wonhyo's theory and deeds. Particularly, Wonhyo's philosophy and actions, embodying the benefits of humanistic and relational Buddhism, are expected to contribute significantly to the problem-solving of modern society and to academic advancement in community psychology.

Introduction

Psychology studies human thoughts and behaviors. Among its branches, social psychology, which began with the publication of introductory texts by the psychologist McDougall and the sociologist Ross in 1908, scientifically investigates how people's thoughts, feelings, and behaviors are influenced by social contexts, namely the interplay between individuals and society or between individuals themselves. This field of study continues to evolve and branch out, including into areas such as community psychology. Community psychology was first discussed at a conference held in Swampscott, Massachusetts, USA, in 1965 and is currently a subject of active discussion. Unlike traditional psychology, community psychology emphasizes 'community' over the individual and 'practice' over theory, responding to the criticism of modern academia as fragmented and overly theoretical.
In this context, the study of community psychology and Buddhism has significant academic contributions to make, particularly in two aspects. First, it offers an understanding and academic expansion of human psychology. Buddhism is generally associated with personal cultivation and has profound connections with human psychology, providing significant insights and assistance. Second, it contributes to the community in various aspects. Buddhism, focusing on enlightenment, may seem to maintain a distance from social communities due to its hermit-like tendencies. However, Buddhism also sustains communities through the establishment of temples and religious orders, contributing significantly to social communities, as it does not exist in isolation from society. Buddhism acts as a bridge between the sacred and the secular, the temporal and the eternal, and society and religion. Like other religions, Buddhism transcends its individualistic, transcendent focus on personal cultivation to make a clear contribution to the community as a religion. 1 Particularly, Korean Buddhism has historically not existed in isolation from the realities of community life. Therefore, the theories and practices of community engagement in Korean Buddhism could provide significant theoretical foundations and practical examples for community psychology. Until now, research that simultaneously addresses Korean Buddhism, community, and social psychology has been virtually nonexistent. One reason for this may be the perception of Buddhism as primarily focused on individual ascetic practices or as inclined towards seclusion and detachment from worldly affairs. However, both the field of community psychology and the objectives pursued within Korean Buddhism share a common goal: the establishment of healthy communities. This paper seeks to explore this intersection, particularly focusing on the philosophy of Wonhyo (元曉, 617-686), to sustain communities through the long-standing practices and profound theories of Korean Buddhism and to theoretically advance the study of community psychology.
Wonhyo is one of the most renowned and influential monks in Korean Buddhism and East Asian Buddhist history. He is a critical figure who united two different sources of thinking, dharma nature (dharmatā 法性) and the characteristics of all phenomena (lakṣaṇa 法相), while playing a vital role in reconciling 'conflicting doctrines' into an embracing understanding, which came to be known as 'Hwajaeng' (harmonizing disputes 和諍). Through this philosophy, Wonhyo demonstrates how conflicting ideas are not actually in conflict, as the conflicts gain reconciliation from within the problem itself, often postulated as 'One Mind' (il-sim 一心). He then attempts to resolve the dilemma with an ideal of Bodhisattva practice, where one returns to the conventional without residing in the ultimate. Likewise, while connecting his idea of One Mind as the ground of Hwajaeng with Bodhisattva practice for the people within a community, he embarks upon an exemplary community therapeutic action based on his idea of 'Non-hindrance' (muae 無礙). Through this exploration, this paper aims to shed new light on the contributions of Korean Buddhism, centered around Wonhyo, to community building, and to expand the scope of community psychology. This paper will primarily employ descriptive methodologies among the major research methods of community psychology, including natural observation, case studies, archival research, surveys, and psychological testing, with a particular focus on archival research, specifically a literature review. This selection is influenced by the nature of Buddhism, especially that associated with Wonhyo, as the subject of study. Although experimental methods and descriptive studies based on surveys and case studies are viable within the context of Buddhism, this paper will focus on the theoretical and practical aspects surrounding Wonhyo's writings. It will elucidate the relevance between Buddhism in general and community psychology, and subsequently concentrate on the points of connection with community psychology, particularly through Wonhyo's philosophy of 'One Mind', 'Harmonizing Disputes', and 'Non-hindrance'.

The Common Aims of Community Psychology and Buddhism

Given the relatively short history of modern psychology research and the challenges in integrating Western academic disciplines and theories with Asian Buddhist thought, the theoretical study of Buddhism within the realm of community psychology remains underdeveloped. Given the research characteristics of community psychology, we can define it as follows: 'A discipline that strives to understand and enhance the quality of life for individuals, communities, and society through the concurrent implementation of research and practical action'. 2 Interestingly, these characteristics resonate with, and present a unique opportunity for, Korean Buddhism, with its extensive historical background, to contribute significantly to the field of community psychology. This paper aims to explore the potential intersections between Korean Buddhism and community psychology, particularly focusing on the concepts of 'problem-solving within the community through a balanced contemplation of the individual and structure' and 'pursuing community change through practice', which are central to both Korean Buddhism, especially in the context of Wonhyo's thought, and community psychology.
In this regard, Korean Buddhism, with its long history, has significant contributions to make to community psychology. The exploration of Korean Buddhism in relation to community psychology begins with the key characteristics that define community psychology and are also a feature of Korean Buddhism and Wonhyo's thought: 'problem-solving within communities through a balanced consideration of individuals and structures' and 'pursuing community transformation through practice'. In this respect, community psychology is applicable to the study of Korean Buddhism insofar as the discipline 'endeavors to understand and improve the quality of life for individuals, communities, and society by conducting both research and practical action'. This characteristic is also found in the Korean Mahāyāna tradition, especially with the emergence of Wonhyo, who emphasized the concepts of One Mind, Harmonizing Disputes, and Non-hindrance in relation to the interdependent nature of reality and the Bodhisattva practice for the benefit of people and the world. Furthermore, just as community psychology recognizes the value of opposing viewpoints and divergent reasoning (Kloos et al. 2012, pp. 58-59), this paper aims to elucidate the significance of Wonhyo's approach of One Mind, the two aspects of enlightenment, and the harmonizing of opposing views. This approach can be applied to various contemporary issues, such as equality, social justice, equity, and individual and collective well-being, not only in Korea but also in a global context.

While this paper discusses several examples and possibilities from Buddhist and community psychology perspectives, such an approach serves as an exemplar for academic disciplines and religions that might otherwise tend towards excessive abstraction. Furthermore, it prompts individuals in contemporary society, often preoccupied with self-interest, to recognize the intrinsic connection between self and community. Through this understanding, humanity may be better equipped to address current global challenges such as environmental problems and the potentially destructive implications of scientific and technological advancements, including weapons of mass destruction and artificial intelligence. Consequently, this approach not only fosters individual well-being but also promotes and sustains communal values and collective existence.

Approaching Community Problems through a Balanced Consideration of Individuals and Structures

While community psychology places emphasis on context, structure, and other such elements, it does not overlook the individual. Rather, it focuses on the influence of the community on the individual through physical environments and interpersonal relationships, which are key concepts in community psychology. 3
Community psychologists, emphasizing community, use the term 'context' instead of 'situation', which is commonly used by social psychologists. The meaning of 'context' in community psychology is as follows:

Context compresses all the structural influences that affect an individual's life into a single expression, including family and social relationships, neighborhood, school, religion and community organizations, cultural norms, gender roles, socioeconomic status, etc. Without adequately describing these structural influences, research and practice will not succeed, and this is referred to as the error of minimal context (Kloos et al. 2023, p. 35).

As explained above, the meaning of context encompasses the character of the community, cultural atmosphere, social norms, and regulations discussed in terms of external forces, in a more comprehensive aspect than merely 'situations'. 4 In short, it 'refers to all aspects of the relevant setting, including cultural traditions and norms; the skills, goals, and concerns of the individuals; historical issues in that setting (e.g., prior experiences with similar innovations); and all the elements of community capacity' (Kloos et al. 2012, p. 334). Conversely, 'ignoring or discounting the importance of contexts in an individual's life is referred to as the error of minimal context' (Kloos et al. 2012, p. 10). Hence, community psychology focuses on the interplay between the individual and the situation, considering both aspects together in understanding the causes of certain events. Specifically, community psychologists aim to grasp the mutual influence between people and their context. Unlike traditional psychology, which concentrates on the individual's issues, this approach advocates a comprehensive perspective.

Indeed, this is closely related to changes in modern society. That is, after the modern era, there has been a shift from a subject-centered perspective to one that goes beyond the subject. In this process, academic terms such as 'context' in community psychology and 'situation', 'structure', 'atmosphere', and 'field' in philosophy or sociology have emerged and been emphasized in contrast to the 'subject'. Although this trend focuses more on what surrounds the subject rather than the subject itself, it does not imply a disregard for the acting individual. Instead, it moves away from discussions that were solely focused on the individual, emphasizing a balanced consideration of 'individual' and 'structure' (Kim 2022, pp. 46-57).

Historically, it can be observed that communities have changed in accordance with the situations and contexts that justify their actions. This implies that if situations and contexts are intentionally manipulated, communities and societies can be moved accordingly. In fact, the direction of change, whether right or wrong, and issues of justice and injustice are not of primary importance here. Anything is possible if it can be regulated by the situation and justified by the context, a fact that is often found in tragic events throughout history. Milgram's experiments starkly reveal this, showcasing a passive attitude where no one questions or resists in wrongful circumstances. 5
Situations and contexts affect not only the community but also individuals. Once humans are convinced of something, it becomes challenging for them to change their minds. Festinger, who introduced the concept of cognitive dissonance, suggested that inconsistency among beliefs or behaviors leads to psychological tension, making it difficult for convinced individuals to change. In a similar vein, Aronson's research into self-justification shows that when people act in ways that conflict with their beliefs, leading them to feel foolish, they tend to justify their actions to reduce dissonance. These observations highlight a common human tendency to insist on one's viewpoint, regardless of its correctness, once one's stance is established (Kloos et al. 2023, p. 35).

This applies to the stance of community psychologists who theoretically study and academically practice community psychology. However, community psychology has yet to address these issues with the necessary attention. Consequently, within the field of community psychology, the following critique has emerged.

Community psychologists believe that there is no value-neutral research. Research is always influenced by the researcher's values, biases, and the context in which the research is conducted. Therefore, when describing the results of research, one should not rely solely on the analyzed data but pay attention to values and context (Tebes 2017, pp. 21-40; Kloos et al. 2023, p. 59).

Therefore, community psychologists need to continually 'reflect' on whether their research and activities align with their own and the community's values and whether they are objective. 6 Thus, it can be summarized that an 'essential reflection' on the situation and context serves as a control mechanism to prevent community psychologists, as well as communities and individuals, from going astray.

In this context, Wonhyo's Buddhist theory and praxis can contribute in the following ways. Firstly, Wonhyo seeks enlightenment through various ascetic practices such as śamatha, vipaśyanā, early Seon (Chan, 禪), and recitation. Enlightenment in Wonhyo ultimately connects with the verification of the impermanence of life. Buddhist practice in Wonhyo is an act of reflecting on oneself and the surrounding situation and context in a world characterized by impermanence. There have been practical applications of 'critical reflection (liberating insight)' in Wonhyo to real-world problems, finding certain solutions within the problem itself. For instance, through contemplating the origins of pain or dissatisfaction in everyday life, practitioners can find that the actual problem lies in one's own greed, anger, or avidyā (ignorance) caused by certain causes and conditions, i.e., internal or external contexts. Thereby, through contemplating the cohabiting characteristics of the problem in a certain context itself, one can return to one's One Mind, realizing that one's original enlightenment is not different from non-enlightenment in the midst of the arising and ceasing of everyday affairs.

In this regard, we can also note that community psychologists' introduction of 'mindfulness', or 'essential reflection', a specific meditation technique developed from Buddhist critical reflection, to American police officers resulted in notable changes. 7
Essential reflection delves deeper by enhancing the theoretical approach to reflection itself. Scientists like Crick and Tononi viewed human emotions, memories, ambitions, consciousness, and free will as merely the result of vast networks of neuronal activity. Wonhyo and most Buddhists also see human emotions as phenomena arising from the combination of the five aggregates under dependent origination. 8 Freud and Jung emphasized the importance of the unconscious or collective unconscious, and contemporary philosophers in psychoanalysis also highlight the potential and importance of the unconscious. However, modern neuroscience struggles to find clear distinctions between consciousness and unconsciousness.

In contrast, Wonhyo frequently draws upon Yogācāra Buddhism, which emerged from the 3rd century onwards, equating consciousness (vijñāna) with the mind (citta) and mentation (manas), and adheres to the classification of eight consciousnesses. This system culminates in the concept of ālaya-consciousness (ālayavijñāna, or storehouse consciousness), which is understood as the subliminal yet foundational form of consciousness. Wonhyo follows the explanation of how seeds (potentialities) stored in the ālaya-consciousness manifest under certain conditions, influencing future seeds and conditions, thus interpreting specific phenomena or events within a community as the collective karma of individuals who performed similar actions in past lives. From this perspective, individuals and communities are seen as interconnected over time and space, influenced by dependent origination or mutual perfuming, offering a macroscopic and diachronic view of their interrelation.

Naturally, this perspective is linked to the altruistic and sentient-being-oriented characteristics of Wonhyo's bodhisattva practice, in both its theory and its practice. Wonhyo aims to enlighten all sentient beings, in cooperation with individuals and communities, just as community psychologists aim to promote human equality and wellness using institutional or self-aid community action. In this regard, Wonhyo places significant emphasis on the teaching and enlightenment of human beings with a humanistic orientation, which could be termed 'Humanistic Buddhism'. 9 From the standpoint of the Mahāyāna spirit, Wonhyo's Humanistic Buddhism focuses on the individual practitioner's path to enlightenment. However, such characteristics are not limited to the individual practitioner alone. Within the Korean Buddhist community, an individual practitioner bears Buddhist attributes or Mahāyāna bodhisattva precepts, being a follower of the community's discipline. From this viewpoint, Humanistic Buddhism, following the spirit of the Bodhisattva path, does not neglect the role and duties of the individual towards the community. Thus, the influence of Mahāyāna's humanistic spirit on the community is directly connected to the individual's cultivation, and this cultivation brings benefits and changes to the community. Practices such as contemplation, Seon meditation, or samādhi and vipaśyanā enable 'critical reflection' on oneself and the world. Through personal practice, it aids in the purification of oneself and the community, thereby securing a vision to perceive reality.
These humanistic, Mahāyāna Buddhist characteristics are well exemplified in Wonhyo's Buddhist theory and practice. Amidst turmoil, Wonhyo championed the fundamental practice of returning to the source of One Mind (歸一心源) for the benefit of all sentient beings (饒益衆生), aiming to bring true happiness to the people. As such, this personal practice of returning to one's origin is inseparable from the Bodhisattva practice (benefiting sentient beings) for the people. These characteristics, as demonstrated in Wonhyo's thoughts and practices, extend to later features of Korean Buddhist practice, such as practicing together in one spirit. Unlike the Buddhist practices in China, Taiwan, or the West, Korean Buddhism often involves communal practice in the same space, whether it be Seon practice, meditation, gongan (koan 公案) and hwadu (話頭) practice, or temple stays. Activities such as temple maintenance, prayers, and almsgiving are also often conducted collectively. In this respect, the theory and practice of enlightenment in Korean Buddhism, initiated and propelled by Wonhyo, contribute significantly to the balanced development of community psychology and offer potential solutions to community issues through a balanced examination of individuals and structures.

Seeking Community Change through the 'Harmony of Theory and Practice'

Community psychology has previously been defined as 'a discipline that seeks to understand and enhance the quality of life of individuals, communities, and society by integrating research and practical action'. From the phrase 'a discipline that seeks to enhance', it is evident that, unlike other fields, there is a clear intention to utilize theoretical foundations for practical application. Community psychology demands direct changes within communities. Thus, it focuses on observing contexts (structures; fields) rather than individuals alone, considers changes within communities beyond theoretical confines, and concentrates on communities as concrete fields of practice. Community psychology emphasizes practices that change the relationships, environments, and structures of communities through individual changes. In pursuing community change through the 'harmony of theory and practice', community psychology shares many similarities with Buddhism, especially with Wonhyo. The specifics are as follows. Firstly, it emphasizes practice based on the analysis of certain events or contexts. Community psychologists, who prioritize contexts and the relationships between individuals and communities, thus have a participatory and practical orientation, resonating with the relational characteristics emphasized in 'Hwajaeng', based on the traditional Buddhist conception of dependent origination. Wonhyo viewed Hwajaeng as a process of reconciling and facilitating communication between conflicting viewpoints through several steps: (1) identifying the conditions under which each perspective is established and understanding the causal relationships between these conditions; (2) reflecting on the validity of the established conditions and their causal relationships; (3) contemplating the 'partial validity and value' (一理, il-ri) inherent in each condition and its causal relationships; (4) generating common meaning through the acceptance of these partial validities. This approach allowed Wonhyo to synthesize seemingly contradictory views by recognizing the contextual validity of each perspective and integrating them into a more comprehensive understanding of the context.
A similar approach can be found in community psychology. (1) Social issues involve opposing viewpoints which can both be true (at least, both hold some important truth). (2) Recognizing important truths in opposing perspectives forces us to hold both in mind, thinking in terms of 'both/and' rather than 'either/or' (Kloos et al. 2012, p. 58). (3) On this line of reasoning, Rappaport advocated divergent reasoning for community psychology: identifying multiple truths in opposing perspectives; recognizing that conflicting viewpoints may usefully coexist; and resisting easy answers. The best thinking about social issues takes into account multiple perspectives and avoids one-sided answers. (4) Dialogue that respects both positions, rather than debate that creates winners and losers, can promote divergent reasoning (Kloos et al. 2012, p. 58).

Likewise, by locating the conditions under which each truth is established and drawing validity from an understanding of the causal relationships between these conditions, Hwajaeng theorists and community psychologists do not flaunt their theoretical knowledge in this analytical and practice-oriented stance. That is, 'when conducting research, community psychologists do not consider themselves superior to or more knowledgeable than the community members. As experts, they provide knowledge but play a role in enabling the expression of the members' resources, strengths, and knowledge to be integrated into the programs' (Perkins et al. 2004, pp. 321-40).

Secondly, the importance of context is emphasized. When someone falls while walking, people are more likely to first consider whether the individual was intoxicated or had a physical issue, rather than thinking about the road being uneven. However, community psychologists focus on the context in which behaviors occur. They go further, exploring methods that can effectively prevent problems before they arise rather than after. Through this approach, community psychologists are interested in alleviating human suffering and advancing social justice (Kloos et al. 2023, p. 27). This aligns with the Bodhisattva spirit in Mahāyāna Buddhism and Wonhyo, which seeks to break the cycle of individual and collective karma. Following this spirit, Wonhyo's philosophy sought to reconcile differences in perspective according to context, creating a common understanding that is accessible and beneficial to all, resonating with the Bodhisattva spirit. The positive impact of Buddhist values on Korean communities is grounded in such relational and humanistic thinking traditions, emphasizing context, as exemplified by Wonhyo's philosophy of the One Mind Dharma-realm (Dharmadhātu). 10 However, these traditions have not completely resolved real-world problems. In response to this demand, the emergence of relational Buddhism in contemporary Western countries (Kwee 2010, pp. 433-34) is well in tune with the approach of community psychology, respecting relations and contexts. Both relational Buddhism and community psychology assert that 'individuals are not isolated beings but should be understood within their relationships' or contexts (Kloos et al. 2023, p. 28). Practitioners engaging in both practice and Bodhisattva actions strive to 'understand and enhance the quality of life of individuals, communities, and society by integrating research and practical action concerning the relationships among individuals, communities, and society' (Kloos et al. 2023, pp. 27-28).
In community psychology, there is an emphasis on social explanations for human subjects within the frameworks of relational humanism and social constructionism. This also corresponds with Wonhyo's humanistic and relational Buddhist orientation, which emphasizes the realization that individuals within the Dharma-realm are in a non-dual relationship with it, and stresses cooperative engagement between oneself and others, as well as between individuals and the community. Relational Buddhism focuses not just on the category of the individual, but on the life shared within the community and society at large. Humanistic thought also focuses on elevating the mundane life of individuals within the community to a life of enlightenment. Particularly, the relational Buddhist orientation characteristic of Korean Buddhism has significantly impacted communities through the harmony of theory and practice. This feature is evident in Wonhyo's philosophy of harmonizing disputes, the reformative monastic movements of the Goryeo (高麗) era, the righteous army movements during periods of war in the Joseon period, the independence movements against Japanese occupation, and the Buddhist modernism of Manhae Han Yong-un (萬海 韓龍雲, 1879-1944), among others. Especially, as seen in the secular precepts of Silla's Wongwang (圓光, 542-640) and the monk-soldiers and righteous armies of the Joseon dynasty, the Korean Buddhist community has been characterized by its active participation in addressing threats to its community, moving in tandem with society.

Likewise, the common goals of Buddhism and community psychology have been explored through the dual aspects of 'individual and structure' and 'theory and practice' aimed at the development of communities. However, the influence of these two fields is not limited to these aspects alone. As Zimbardo pointed out, many people remain silent even when witnessing events within the community. Buddhism's pursuit of enlightenment questions the situation and context, and through the essential exploration of life, it can exert a controlling function over domination and control by situation and context. Buddhism, through insights into situations and contexts characterized by interdependence or relationality, enables the maintenance and improvement of a desirable relationship with oneself and the surrounding environment. Particularly, the humanistic and relational Buddhist orientation of Korean Buddhism, based on Wonhyo's philosophy of One Mind and harmonizing disputes, shares the balanced view of community psychology, premised on respect for relative, pluralistic values rather than absolute, authoritarian truths.

Shared Orientations and the Philosophy of Wonhyo

As outlined above, the philosophical characteristics of Wonhyo, a seminal figure in Korean Buddhism, are generally encapsulated by the concepts of 'One Mind', 'harmonizing disputes or conflicts', and 'Non-hindrance'. 'One Mind' is presented as the fundamental nature of the human mind according to the Awakening of Faith, in line with the concepts of Tathāgatagarbha (Buddha-nature) and ālaya-consciousness. 'Harmonizing disputes' refers to the effort to reconcile the differences and contradictions among the diverse Buddhist doctrines that were introduced to Silla (新羅) in the 7th century. 'Non-hindrance' reflects the egalitarian perspective evident in Wonhyo's Bodhisattva practice for the people and community.
Diverse interpretations highlight the richness of Wonhyo's One Mind, Hwajaeng, and Non-hindrance thought, demonstrating its potential for multiple philosophical and practical applications in addressing conflicts and harmonizing disparate viewpoints. 11 As most of the previous studies suggest, Wonhyo's approach of 'returning to the source of the One Mind' (歸一心源) became a dynamic force that could dissolve and unite the various conflicting arguments of his time within the single taste of enlightenment. With this enlightenment, he propagated to the people that the only real recourse for entering the domain of non-conceptual enlightenment and wisdom is none other than the experience of deep faith in Mahāyāna. 12 In this way, Wonhyo's Hwajaeng turned out to aim not just at theory but also at the practice and experience of the One Mind in everyday life, represented by the Buddha-nature that even commoners and people of humble status share. To be sure, his harmonizing concern lies in elevating the conventional level to the religio-experiential dimension at the ultimate level, realizing the state of deep faith in which there is no difference between the Buddha and the ordinary, or the noble and the commoners. In this sense, One Mind hermeneutics are not confined to the individual dimension of words and thoughts, but extend to their shared meanings in particular and social contexts, encompassing both conventional (community) practice and ultimate enlightenment, represented by two forms: initial and original enlightenment.

In this context, Byung-hak Lee has argued that the concept of the 'Integration of Two Enlightenments' (二覺圓通) in the Treatise on the Vajrasamādhi Sūtra represents a logic of integrating initial enlightenment through the equality of original enlightenment. This idea was proposed by emerging Buddhist forces such as Daean (大安), Hyesuk (惠宿), and Hyegong (惠空), who focused on public propagation in opposition to the conservative monastic order centered on social status and precepts. Building on this perspective, the 'Integration of Two Enlightenments' concept in the Treatise on the Vajrasamādhi Sūtra expresses the structure of the Awakening of Faith, in which the Tathāgatagarbha covered by ignorance returns to the One Mind, through the concepts of the 'one-flavor practice' (一味觀行) and the 'ten-fold dharma gate' (十重法門). This implies that while all sentient beings possess the same buddha-nature as 'one flavor', they must undertake the various cultivation methods appropriate to their stage from the perspective of the 'ten-fold dharma gate'. In the Awakening of Faith, original enlightenment, signifying the equality of awakening, corresponds to the 'one-flavor practice', while initial enlightenment, meaning the beginning of cultivation, corresponds to the 'ten-fold dharma gate', which presents suitable practices for the common people through various methods (Lee 2006, pp. 196-208). Thus, the bodhisattva practice emphasized in both the Awakening of Faith and the Treatise on the Vajrasamādhi Sūtra stresses public propagation as an altruistic act based on the power of original enlightenment that everyone possesses. This emphasizes practical cultivation and communal healing suitable for the common people exhausted by the wars of the Three Kingdoms and the successive wars with the Tang in the 7th century.
Taesoo Kim and Dugsam Kim also note that in Wonhyo's Essentials of the Sūtra of Immeasurable Life, the scope of rebirth in the Pure Land gradually expands as the sections or stages progress. Particularly in the fourth stage of the Land of Immeasurable Life, women, the disabled, Hinayana practitioners, and ordinary people are included as candidates for rebirth. This emphasizes the characteristic of equality, suggesting that all beings have the potential for rebirth as buddhas and thus speak with one unique voice, free from discrimination (Kim and Kim 2023, p. 24). Vermeersch also suggests that the usage of 'integrating the three and returning to the one' (hoesam kwiil) in works such as the Commentary on the Lotus Sūtra, similar to the situation of Zhiyi of the Tiantai school, stems from an interest in integrating opposing communities amidst the unification wars of the Three Kingdoms (Vermeersch 2015, pp. 95, 114).

During the early Silla dynasty, the ruling class consisted of the 'sacred bone' (seonggol) lineage of the 'copper wheel' (dongryun-gye 銅輪界) rank, who viewed themselves as universal monarchs (cakravartin) and descendants of the Śākya clan (Kim 1987, pp. 33-35). However, by the time of King Taejong Muyeol (太宗武烈王, 603-661), a contemporary of Wonhyo, the ruling system had shifted to the 'true bone' (jingol) royal family of the 'chariot wheel' (saryun-gye 舍輪界) rank, which was regarded as one level below the sacred bone. Unlike the aristocratic Buddhist tradition closely associated with the early dynasty, as exemplified by monks like Jajang (慈藏, 632-647) and Wongwang (圓光, 542-640), Wonhyo came from the sixth-rank (yukdu-pum) class and dedicated himself to popularizing Buddhism among the masses, collaborating with other monks of lower social status such as Daean, Hyegong, Hyesuk, and Sabok.

Furthermore, Wonhyo's egalitarian interpretation of Buddhist doctrine, as reflected in his commentaries on texts like the Awakening of Faith and the Vajrasamādhi Sūtra, emphasizes the fundamental equality of all beings and the universal potential for enlightenment. This perspective aligns with the aspirations of community psychology, which advocates for equality, equity, fair shares, and liberation (Kloos et al. 2012, pp. 55-57, 236-37). His approach to Buddhist philosophy, particularly his concept of the 'Integration of Two Enlightenments' (二覺圓通) and the structure of 'One Mind, Two Aspects' in his interpretation of the Awakening of Faith, provided a theoretical framework for reconciling apparent contradictions and promoting inclusivity in Buddhist practice. This inclusive and egalitarian approach to Buddhist thought and practice can be seen as a response to the changing social and political dynamics of Silla society, as well as an attempt to make Buddhist teachings more accessible and relevant to a broader audience.
Specifically, Wonhyo's philosophy of the middle played a vital role in healing the wounded hearts of the Silla people, who were enduring the hardships of the unification wars with Goguryeo (高句麗) and Baekje (百濟), followed by the wars with Tang China, by offering hope through the practice of chanting the name of Amitabha Buddha and suggesting that anyone could be reborn in the Western Pure Land through this practice. Particularly noteworthy are Wonhyo's teachings on 'One Mind', 'harmonizing disputes', Maitreya beliefs, Hwaeom (華嚴), and Pure Land thought, which are not confined to theory but developed into community practices emphasizing equal participation. These teachings had a significant influence on his descendants. For instance, Uicheon (義天, 1055-1101), a National Preceptor (Daegak Guksa 大覺國師) as well as a prince monk of the Goryeo (高麗) dynasty, emphasized Wonhyo's 'harmonizing disputes' in an attempt to reconcile the divisions among the doctrinal schools and to promote the unity of Seon and the doctrinal teachings. In light of these objectives, Wonhyo was posthumously honored with the title 'National Preceptor of Doctrinal Reconciliation' (Hwajaeng Guksa). This designation reflects Wonhyo's significant contributions not only to Buddhist thought, but also to community integration, particularly his influence on harmonizing and integrating various doctrinal positions and communities.

Further, this posthumous title underscores Wonhyo's enduring legacy as a synthesizer of Buddhist thought and his role in making complex Buddhist ideas more accessible to a broader audience. It also highlights the continued relevance of his inclusive and harmonizing approach to doctrinal and ideological differences in the Korean Buddhist community. According to Vermeersch, 'the fact that Uicheon was granted a title in 1101, which strongly resembles that of master of pacifying the disputes further confirms that this aspect of his work continued to hold appeal long after he passed away'. Yet, it can also be noted that 'Wonhyo's title was likely conferred at the instigation of Uicheon, who wanted to use Wonhyo as a springboard for his own project of integration through founding the Chontae (天台) school' (Vermeersch 2015, p. 106).

Another example would be Jinul (知訥, 1158-1210), who led Buddhist reform movements in the middle of the Goryeo era. The legacy of Wonhyo's approach can be seen in Jinul's efforts to harmonize concentration (samādhi) and wisdom (prajñā) (定慧雙修), and doctrinal teachings and Seon traditions (禪敎一致), particularly in his establishment of the Suseonsa (修禪社) Seon practice community. Jinul's approach to Buddhist reform and practice was characterized by several key elements that reflect Wonhyo's influence: (1) Integration of different traditions: like Wonhyo, Jinul sought to harmonize seemingly disparate Buddhist teachings and practices, focusing particularly on integrating Seon meditation with doctrinal study. (2) Emphasis on non-duality: Wonhyo's philosophy emphasized the underlying unity of various Buddhist doctrines; similarly, Jinul stressed the non-dual nature of sudden enlightenment and gradual cultivation, as well as the dual cultivation of concentration and wisdom. (3) Accessibility of Buddhist practice: Wonhyo aimed to make Buddhist teachings more accessible to a broader audience; Jinul continued this tradition by establishing the Suseonsa community, which provided a structured environment for both monastic and lay practitioners to engage in serious Buddhist practice.
(4) Holistic approach to practice: Jinul's emphasis on the simultaneous cultivation of concentration and wisdom reflects Wonhyo's holistic view of Buddhist practice and community; this approach sought to balance intellectual understanding with experiential realization in everyday practice. (5) Reform of monastic institutions: while Wonhyo worked outside the established monastic system, his ideas influenced later reformers like Jinul, who sought to revitalize Buddhist communities from within. (6) Emphasis on original enlightenment: Wonhyo's interpretation of the Awakening of Faith, which emphasized the concept of original enlightenment, influenced Jinul's understanding of sudden enlightenment and gradual cultivation.

Likewise, Jinul's establishment of the Suseonsa community can be seen as a practical implementation of Wonhyo's harmonizing philosophy. By creating a space where different aspects of Buddhist practice could be integrated, Jinul sought to overcome sectarian divisions and promote a more holistic approach to Buddhist cultivation. In essence, Wonhyo's Hwajaeng legacy provided a philosophical and methodological foundation for Jinul's reform. Thus, by emphasizing harmony, integration, and accessibility, Wonhyo's ideas and community practice contributed to the development of a distinctly Korean approach to Buddhism, especially the harmony between community practice and awakened living, which continues to influence Korean Buddhist thought and practice among his descendants.

Another notable example of Wonhyo's later influence can be found in faith-based movements like the White Lotus Society (白蓮結社) during the latter period of the Goryeo dynasty under military rule. This movement advocated for the secularization and practical application of Buddhist community practices. Within this context, the monk Mugi (無奇, 14th century) of the White Lotus lineage played a comforting role for people suffering under the late Goryeo military regime by emphasizing salvation through the chanting of Amitabha Buddha's name and integrating teachings from the Tiantai, Pure Land, and Amitabha doctrines. Furthermore, the communal participatory spirit of Korean Buddhism was also demonstrated through the activities of warrior monks during the Japanese invasions of Korea (Imjin War 壬辰倭亂) led by Hyujeong (休靜, 1520-1604) and Yujeong (惟政, 1544-1610), and the national defense efforts of Seonsu (善修, 1543-1615) following the Manchu invasions (1636-1637). This spirit can also be found in the theory and practice of Buddhist reformation during the Japanese colonial period and the modern era, such as the reformist movements of Han Yong-un, Paek Yongsung (白龍城, 1864-1940), and Kim Iryeop (金一葉, 1896-1971), participation in the March 1st Independence Movement of 1919, as well as Tanheo's (呑虛, 1913-1983) popular Buddhist movements and Seongcheol's (性徹, 1912-1993) Buddhist purification movements in contemporary Korea (Kim 1998, pp. 191-205; Buswell 2014, pp. 1-320; Nelson 2016, pp. 1049-51; Ko 2012, pp. 41-79; Hwang 2015, pp. 7-25; Park 2010, pp. 1-15; 2020, pp. 155-82).

In this tradition, Wonhyo's legacy of Hwajaeng, Non-hindrance, and the equal potential for enlightenment, free from discrimination, oppression, injustice, deep-rooted societal prejudices, and rigid monastic precepts, aligns with various aspects of community psychology. These parallels include liberation from individual and institutional prejudice (Kloos et al. 2012, pp. 234-36), and human rights and women's movements based on equal humanity (Kloos et al. 2012, p. 115).
These concepts encompass various dimensions of human diversity, including race, ethnicity, gender, socioeconomic status, social class, ability/disability, and spirituality (Kloos et al. 2012, p. 245). They share similarities with Wonhyo's legacy in Korean Buddhism, particularly when approached from liberation or community perspectives aimed at promoting community mental health and welfare (Levine and Perkins 1997, pp. 3-6, 48-49, 430-31). Thus, we can see that Wonhyo's philosophy and his legacy of One Mind and Hwajaeng align with community practice and equal humanity in a modern context.

Based on these common orientations in the psychology of the Korean Buddhist community since Wonhyo, this discussion will examine the philosophical roots and practices of Wonhyo's thought from two perspectives: first, the individual within the community, and second, the individual's orientation towards the community. To this end, the discussion will focus on 'One Mind' and the distinction between enlightenment and non-enlightenment from the perspective of 'the individual within the community', and on the meditation practice of 'one flavor' (一味觀行) and Bodhisattva practice from the perspective of 'the individual's orientation towards the greater community'. Finally, by summarizing the above discussion, this paper will explore how Wonhyo's thought, through his relational thinking of 'harmonizing disputes', can contribute to solving the psychological issues of individuals and communities.

Aspects of the Individual within the Community: One Mind and Two Gates, Enlightenment and Non-enlightenment

Wonhyo describes the foundational and ultimate goal of his philosophy, the One Mind, not in apophatic (negative) but in kataphatic (affirmative) language. The concept of the One Mind transcends the dialectic of negation, representing the point where thought ceases; thus, it is inevitably referred to as the One Mind. In the Commentary on the Awakening of Faith, a representative description of Wonhyo's epistemological ontology appears as follows:

What is meant by 'One Mind'? It refers to the non-dual nature of all dharmas, both defiled and pure. The two gates of truth and delusion cannot be different; hence they are called 'one'. This non-dual locus is the reality within all dharmas. Unlike empty space, its nature is inherently numinous and aware; thus it is called 'mind'. However, if there is no duality, how can there be oneness? If oneness is non-existent, to what does 'mind' refer? Such a principle transcends language and thought. Not knowing how else to designate it, we forcibly term it 'One Mind'. 13

As demonstrated in the passage above, Wonhyo, in his Commentary and drawing from the Laṅkāvatāra Sūtra, elucidates the concept of One Mind from two perspectives. From the standpoint of non-duality, he characterizes it as 'the name of quiescence', emphasizing its inexpressible and indiscernible nature. Conversely, from the perspective of language and thought, he equates it with 'Tathāgatagarbha'.
Subsequently, Wonhyo explicates the One Mind in terms of its two aspects: the gate of suchness (真如門, tathatā) and the gate of arising and ceasing (生滅門, utpāda-nirodha):

To elucidate the gate of suchness: it encompasses the common characteristics of both defilement and purity. Beyond these common characteristics, there are no separate defilement and purity. Therefore, it comprehensively subsumes all dharmas of defilement and purity. As for the gate of arising and ceasing: it distinctly manifests defilement and purity. The dharmas of defilement and purity are all-encompassing; thus they also subsume all dharmas in their totality. 14

As cited above, within the One Mind there are two aspects or gates (二門): the aspect of Suchness based on emptiness, and the aspect of arising and ceasing that engages with the mundane world. In Wonhyo's structure of the One Mind's Dharma-realm, the former represents the transcendental realm akin to the dimension of emptiness (śūnyatā), where all differences dissolve, everything becomes interconnected, and all boundaries disappear. In short, it is a realm where no distinctions or demarcations can be established. The latter, however, represents the mundane world in which sentient beings coexist. Yet, because all phenomena of the six realms arise due to the One Mind, it also serves as the ground for the everyday activities of individuals in society based on linguistic discourse and understanding, thus forming the basis of the mundane world where sentient beings live together. Accordingly, we can see that not only individuals but also communal society itself manifests the One Mind as a one dharma world.

Regarding Wonhyo's interpretation of the One Mind and the two gates, that 'there is no other Dharma outside the One Mind', Hyung-hyo Kim names the two gates of the One Mind, the gate of arising and ceasing and the gate of Suchness, the experiential world and the transcendental world, respectively. According to him, the experiential world is the world represented by consciousness through the perception of sensory conditions and the integration of perceived differences. In the Yogācāra terminology favored by Wonhyo, experience is the cognition in which the world of the five senses, drawn by the individual differences in the functioning of the first five consciousnesses and the sixth consciousness, is represented (Kim 2006, pp. 287-88).

In this regard, Wonhyo's explanation of the One Mind and the two gates can be interpreted through the lens of Yogācāra's three natures theory. 15 This unfolds on the epistemological premise of classical Yogācāra thought, which posits that 'what is characterized by non-discrimination within the dependent nature is Suchness'. 16 Building on this premise, Wonhyo draws from the 'Chapter on the objective aspect of cognitive objects' of the Mahāyānasaṃgraha to elucidate the concept of 'illusory discrimination' (abhūtaparikalpa), or other-dependent nature, within the context of psychological differentiation. 17 Particularly, by quoting the Mahāyānasaṃgraha, he advances and develops the schematic of the Yogācāra school, positioning the pivotal role of other-dependent nature as relationality. 18
Therefore, the structure of transformation between defilement and purity in other-dependent nature establishes the dual ontology of Suchness and the arising and ceasing aspect 19 through a positive relation achieved by negating the negation. Furthermore, the epistemological ontology concerning the two truths, the ultimate and the conventional, along with the three natures (the pervasively conceptualized nature, the other-dependent nature, and the perfectly accomplished nature of reality), aligns the perfectly accomplished nature with the ultimate truth and associates conceptualized reality with the conventional truth through the mediation of the other-dependent nature. This interdependent relationship serves as a foundation linking ultimate truth with cyclic existence, Buddhas with sentient beings, the individual's unconscious with consciousness, and the individual with the community. To this end, Wonhyo correlates the gate of Suchness within the One Mind with the Tathāgatagarbha, encompassing both emptiness and non-emptiness, and cyclic existence with the ālaya-consciousness, where arising and ceasing are in harmony, being neither one nor different.

As it is said, 'arising and ceasing depends on the Tathāgatagarbha, thus there is a mind of arising and ceasing'; this does not mean abandoning the Tathāgatagarbha to adopt the mind of birth and death as the gate of arising and ceasing. This should be understood as 'this consciousness has two meanings', both of which reside within the gate of arising and ceasing. The so-called non-arising and non-ceasing harmonizing with arising and ceasing, being neither one nor different, is referred to as the ālaya-consciousness. 20

As the quote indicates, Wonhyo explains the problem of the arising and ceasing of the mind through the concept of Tathāgatagarbha. He also sees Tathāgatagarbha as a basis for explaining the relationship between arising and ceasing and non-arising and non-ceasing. Moreover, he elucidates how all phenomena of arising and ceasing relate to the world of Suchness, interpreting Tathāgatagarbha in terms of enlightenment and non-enlightenment.

Regarding the arising and ceasing of mind: due to the Tathāgatagarbha, there is a mind of arising and ceasing. That is to say, the non-arising and non-ceasing combines with arising and ceasing, being neither identical nor different. This is called the ālayavijñāna… Therefore, it is said that this consciousness possesses two aspects. What are these two? First, the aspect of enlightenment, and second, the aspect of non-enlightenment. 21

In terms of enlightenment, the world of Suchness is a realm of enlightenment, while the world of arising and ceasing is one of non-enlightenment. The meaning of enlightenment refers to the Dharma-realm that transcends thought and the marks of thought, pointing to the Tathāgata's Dharma-body (dharmakāya) of equality. However, enlightenment and non-enlightenment are not absolute and unchanging states. Non-enlightenment is established by original enlightenment, and original enlightenment also awaits non-enlightenment: an interaction of enlightenment and non-enlightenment. This means that non-enlightenment and original enlightenment coexist within the same mind. Furthermore, original enlightenment generates initial enlightenment through the mysterious perfuming of virtuous habits, which then returns to original enlightenment. 22
Wonhyo argues that the essence of original enlightenment denotes the Tathāgata's Dharma-body of equality, but also contends that original enlightenment interacts with both non-enlightenment and initial enlightenment, thus lacking self-nature. Since it lacks self-nature, there is no fixed state of enlightenment; rather, meaning is created through mutual relationships, which is why it can be termed enlightenment.

Wonhyo presents a dualistic theory of interaction, in which the contradictory properties of original enlightenment and non-enlightenment coexist within the same mind, allowing for a return to original enlightenment through initial enlightenment or a fall into non-enlightenment. Moreover, he emphasizes that when original enlightenment, non-enlightenment, and initial enlightenment reach the state of One Mind, one must redirect one's enlightenment for the benefit of the community. In this context, enlightenment serves as a practical basis for breaking through the limits of individual and communal perception, allowing all beings to realize that they share one enlightenment (original enlightenment) and Buddha-nature. 23

All sentient beings share the same original enlightenment, hence the term 'one enlightenment' (一覺)…. The statement 'all sentient beings are originally enlightened' expresses the meaning of original enlightenment. 'Realizing that all sensory consciousnesses are quiescent and without arising' expresses the meaning of initial enlightenment. This reveals that initial enlightenment is identical to original enlightenment. 24

As elucidated above, the realization that enlightenment itself does not possess an inherent nature leads one to a state of neither sameness nor difference, that is, to a singular enlightenment of 'neither identity nor difference'. Due to the absence of inherent nature, perception and true understanding are neither the same nor different. This is also why the reality of existence is seen in a state of neither identity nor difference. From the perspective of singular enlightenment, the coexistence of differences in perception and true understanding reveals the inherent wisdom and compassionate power of the mind, enabling the practice of infinitely meritorious deeds. In this context, Wonhyo's quest encompasses the realms represented by Tathāgatagarbha and ālaya-consciousness, covering the aspects of Thusness and arising and ceasing, enlightenment, non-enlightenment, and initial enlightenment, without deviating from the state of One Mind and one enlightenment. 25

Wonhyo, following the Laṅkāvatāra Sūtra, states, 'The name for cessation is called One Mind, and One Mind is referred to as Tathāgatagarbha'. 26 The aspect of Thusness is a realm beyond experience, containing only the essence of emptiness, its nature invisible, while the aspect of arising and ceasing encompasses both the essence of Thusness and its phenomenal, sensory existence, including the acts that transform the mind through virtuous deeds (Kim 2006, pp. 288-89).
In this regard, Suzuki also takes note of the non-dual approach of the Laṅkāvatāra Sūtra that Wonhyo follows:

As a man clings to his own false assumptions, he erroneously discriminates between truth and falsehood, and on account of this false discrimination, he fails to go beyond the dualism of opposites; indeed, he cherishes falsity and cannot attain tranquility. By tranquility is meant singleness of purpose (or oneness of things), and by singleness of purpose is meant the entrance into the most excellent samādhi, whereby is produced the state of noble understanding of self-realization, which is the receptacle of Tathāgatagarbha (Suzuki 1961, p. 91).

As evidenced by these quotations, the essence of self-realization proposed by the Laṅkāvatāra Sūtra is a non-dual understanding of the oneness of phenomena, transcending the dualistic discrimination of opposites. Wonhyo arrives at a similar conclusion, yet offers a unique explanation involving a cohabitation of differences that encompasses both the transcendental and worldly realms, ultimately leading to the dharma of One Mind. The transcendental aspect of Thusness is a realm of immutable principles, yet these principles also enter into the realm of worldly phenomena, and the phenomena of arising and ceasing are not separate from the essence of Thusness. Therefore, Thusness and arising and ceasing, principles and worldly phenomena, transcendence and experience, individual and community are neither dichotomously divided nor monistically integrated; hence they are described as a dharma of One Mind with two aspects that merge while remaining neither one nor two.

According to Wonhyo, although Thusness possesses limitless virtues, it is devoid of differentiation, being equal in nature, a singular Thusness. However, differentiation appears through the manifesting and ceasing aspects of karma. That is, all dharmas are solely mind, truly devoid of delusive thoughts, but sentient beings, possessing delusive minds, fail to realize this and perceive all realms. The wisdom and illumination concerning Thusness prevent delusive thoughts from arising in the nature of the mind, and the intent to illuminate the Dharma-realm fully prevents the mind from adhering to erroneous views (見, dṛṣṭi), thus revealing a multitude of pure virtues greater than the sands of the Ganges. 27

In this context, as the embodiment of One Mind, Thusness is non-arising, non-ceasing, equal without discrimination, and vast without limits, and is thus referred to as the 'Great essence'. Moreover, because Thusness exhibits limitless virtues, it is also referred to as the 'Great characteristic'. Thus, the 'Two Greats' of essence and characteristic express One Mind in terms of essence and virtuous qualities. The 'Great function' is clarified in the Sūtra as generating all good causes and effects in both the mundane and supramundane realms. 28 The explanation further elucidates as follows.
Furthermore, regarding the function of true suchness (tathatā): It refers to all Buddhas and Tathāgatas who, while still in the causal stage, generated great compassion, cultivated various perfections (pāramitās), embraced and transformed sentient beings, and established great vows. They aspired to liberate all realms of sentient beings without limitation of time, extending into the infinite future. They regarded all sentient beings as their own bodies, yet did not grasp at the characteristics of sentient beings. What does this mean? It means they truly understood that all sentient beings and their own bodies are equal in true suchness, without distinction. Possessing such great expedient wisdom, they eliminated ignorance and perceived the original dharmakāya. Naturally, there arose inconceivable karmic functions of various kinds, which were identical with true suchness in all places…. They merely appear to function in accordance with sentient beings' perceptions and attainment of benefits. Hence, this is expounded as the functional aspect. 29

From these quotations, we can discern how all mundane activities in Mahāyāna Buddhism relate to supramundane realms, such that the actions of sentient beings transcend mere acts of giving. These actions are instead referred to as 'function' because they benefit other sentient beings and worlds based on insight into ultimate reality. When one bases one's wisdom solely on the indiscriminate principle of the 'One Mind', the distinctions between Thusness and the arising and ceasing of mind become neither entirely different nor identical. Consequently, all actions tend to benefit others out of compassion.

In this regard, the song of Bodhisattva Mahāmati in the Laṅkāvatāra Sūtra concisely explains the essentials of this Mahāyāna spirit.

When thou reviewest the world with thy wisdom and compassion, it is to thee like the ethereal flower, of which we cannot say whether it is created or vanishing, as the categories of being and non-being are inapplicable to it. When thou reviewest all things with thy wisdom and compassion, they are like visions; they are beyond the reach of mind and consciousness, as the categories of being and non-being are inapplicable to them…. In the Dharmakāya, whose self-nature is a vision and a dream, what is there to praise? Real existence is where rises no thought of nature and no-nature…. With thy wisdom and compassion, which really defy all qualifications, thou comprehendest the ego-less nature of things and persons and art eternally clean of the evil passions and of the hindrance of knowledge. Thou dost not vanish in Nirvāna, nor does Nirvāna abide in thee; for it transcends the dualism of the enlightened and enlightenment as well as the alternatives of being and non-being (Suzuki 1961, p. 89).

As can be seen in the above passage, in this egoless, self-penetrating insight of the Dharma-body based on wisdom and compassion, dualisms such as enlightenment and non-enlightenment, as well as the demarcation between existence and non-existence, are transcended. With this enlightenment, every activity can be performed in a way that benefits others and the entire community as part of one Dharma-realm.

In this tradition, Wonhyo proposes that essence, characteristic, and function ultimately reveal themselves as different aspects of the One Mind. From the enlightened perspective of the Dharma-body, whether individual or community, all are merely different facets of the One Mind, distinguished only in terms of worldly phenomena. 30
On the other hand, delusion, ignorance, or discrimination represents a misunderstanding of the mode of existence of things. Dividing subject and object is a processual act of finite cognition, not an absolute one. To overcome this, the enlightenment of One Mind should involve both the will and the intellect. It is an act of intuition born of the will. This will seeks to know itself as it truly is (yathābhūtam dassana), free from all its cognitive conditions (Suzuki 1961, p. 126).

Likewise, Wonhyo's concept of One Mind enlightenment, akin to the holistic perspective in community psychology, transcends individual discrimination. In this view, the actor, objects, and moral values exist in relation to the whole as it is. Thus, enlightenment is not confined to one's individual realm but extends to the entire Dharma world, aligning with the true nature of the One Mind. This perspective resonates with community psychology's emphasis on understanding individuals within their broader social and environmental contexts. In this regard, the content and orientation of Wonhyo's enlightenment share several points of convergence with the assertions and perspectives of community psychology:

(1) While grounding his approach in the enlightenment of One Mind, Wonhyo does not confine himself to individual liberation but considers the entire society as a unified Dharma-realm in which the individual is embedded.

(2) Wonhyo guides potentially abstract enlightenment theory and its associated practices towards concrete action and implementation. This parallels how psychology, initially focused on individual physiopsychological issues, expanded its scope through community psychology. Similarly, Buddhist thought, originally centered on personal enlightenment, broadened its purview through Wonhyo's philosophy. Both share an emphasis on practical application and problem-solving.

(3) Like Wonhyo, community psychology emphasizes 'research through reflexivity' (Kloos et al. 2012, p. 106). This approach naturally fosters mutual respect for human values and stresses 'attending to unheard voices' (Kloos et al. 2012, p. 78), aligning with Wonhyo's emphasis on returning to One Mind and benefiting all sentient beings. Consequently, community psychologists 'seriously contemplate whether to identify themselves as researchers or practitioners' (Kloos et al. 2023, p. 122).

(4) In Wonhyo's view, identity and difference form a relational understanding in which one exists in relation with others, and altruism interconnects with self-identity. This epistemological insight allows for the fusion of identity and difference into a non-duality based on perspective within a community, where individuality and community action coexist in a non-synchronous synchronicity. This represents a dialectic of harmony in differences, housing diversities in One Mind, or common values and assumptions.

Hence, Wonhyo's view of enlightenment shares aspects of community psychology, as both embrace a dialectic of harmony in differences. This approach interprets distinctions between identity and difference, individual and community, and reflection and action not as contradictions, but rather as interdependent relations within a broader context.
The Aspect of Individual Orientation towards the Community (the Greater Self): The Practice of One-Flavor and the Bodhisattva Path

As outlined earlier, One Mind explicates the manner in which the minds of sentient beings can transform into the dimension of the Buddha-mind, thereby enabling the dharmas of Hwajaeng. That is, One Mind does not imply abandoning the cycle of arising and ceasing to directly enter the realm of Thusness. Given that the wandering mind of sentient beings and the Buddha-mind, which enjoys bliss in the gate of Thusness, are not separate but coexist within the same dharma of One Mind, the moment individuals grasp the reality of truth at the conventional level, realizing that Hwajaeng (solution) can be drawn from the One Mind (problem) itself, they are likely to embark on the Bodhisattva path of Mahāyāna. In this context, the ontological contemplation of the One Mind intersects with the community psychology of practice. Specifically, the One Mind in Mahāyāna integrates the minds of individuals and communities into a practice of One-flavor. This integration forms a significant interrelationship, bridging the duality of the gate of Thusness and the cycle of arising and ceasing, and of the one (problem) as a potentiality and the many (solution) as an actuality. It manifests the interplay between enlightenment and ignorance, Bodhisattva and sentient beings, and meditation and the Bodhisattva path. Moreover, it highlights the interconnected character of individuality and community, emphasizing the dual aspects of neither identity nor difference within the interpenetrated reality of One Mind (Kim 2006, pp. 284-85).

Wonhyo viewed the minds of sentient beings and Bodhisattvas as two aspects of the same mind, in that both seek and practice enlightenment according to the Buddha's teachings of dependent origination. When covered by primal ignorance and the ensuing greed, hatred, and delusion, one is a sentient being, ignorant of the causes and conditions of dependent origination as well as of the interpenetrated reality of the one Dharma world. However, when one realizes the inherent perfection known as Buddha-nature or Tathāgatagarbha, one becomes a Bodhisattva in a state of original enlightenment. Upon reaching the ultimate enlightenment of One Mind, truth transcends sensory perception and concepts, allowing for equal and non-discriminatory insight into the perceiver's cognition and the world. In turn, this foundation of miraculous Bodhisattva activity transforms cognition itself into action, without the distinction between subject and object, self and others.

In this context of unhindered realization and action, Wonhyo wandered, spreading the practice of Non-obstruction after illuminating the principles and methods of enlightenment, using the great gourd as a symbol of unhindered Bodhisattva practice to enlighten the community's beings suffering from turmoil. 31 Within the context of these unimpeded practices, the Treatise on the Vajrasamādhi Sūtra teaches samādhi and chanting as practices of enlightenment that return to the source of One Mind. The samādhi of One-flavor is psychological healing through contemplation of the interdependent relation between oneself and others, cognition and object: realizing that neither self nor others inherently exist alone, and thus transcending the dichotomy of truth and convention by training in the dharma that both are inherently empty.
Regarding the essence of the teaching (宗體) in the 'samādhi of One-flavor' section 32 of the Treatise on the Vajrasamādhi-sūtra, Wonhyo elucidates that the Sūtra transcends the dichotomy between truth and convention based on the wisdom of the two enlightenments: original and initial enlightenment. Both are indestructible and unrisen, presenting the object and wisdom as inherently non-arising and empty. The wisdom of original enlightenment and initial enlightenment is also unrisen. Therefore, original enlightenment and initial enlightenment, subject and object, and wisdom and compassion form a relationship that is neither one nor two, yet does not merge into one.

Since the object enters into cognition and others into one's being, others are not separate entities from oneself. This realization allows for a middle way that does not lean towards extremes in the perspectives of self and others, the individual and the community, recognizing the interconnectedness of oneself and others, and of the individual within the community. This relational thinking posits that all distinct entities are equally interconnected, allowing for a common solution through a single enlightenment (one enlightenment). The One-flavor practice of the Vajrasamādhi Sūtra thus elevates oneself and others to a common meaning, or One-flavor, further practicing the Bodhisattva path of non-duality and the integration of the individual and community through conciliatory thinking based on the one enlightenment.

In the Commentary on the Awakening of Faith in the Mahāyāna, Wonhyo connects the practice of single-minded attention, or the One Mind, with the Bodhisattva path based on the principle of the One Mind. He states, 'It is because the One Mind gives rise to the activities of the six realms (六道) 33 that one is able to generate the vow of vast salvation' 34 to widely save sentient beings.

The six realms of existence do not transcend the One Mind, thus enabling the arising of great compassion rooted in the understanding of shared identity. This elucidation dispels doubt and facilitates the generation of the great aspiration… It elucidates that although the various teachings are numerous, the initial stages of practice do not extend beyond two gates: cultivating cessation (śamatha) in accordance with the gate of suchness, and developing insight (vipaśyanā) in alignment with the gate of arising and ceasing. The simultaneous operation of cessation and insight encompasses myriad practices. By entering these two gates, one gains access to all gates. This clarification dispels doubt and enables the initiation of practice. 35

As evident from the above passage, Wonhyo's argument is that practitioners who have trained themselves in cessation and insight will arouse great compassion, recognizing the shared identity of individuals and community, based on the understanding that the six realms are not separate from the One Mind. Just as community psychology emphasizes the connection and context between individuals and communities (Kloos et al. 2012, pp. 10-11), Wonhyo underscores the connection and context between the practitioner's One Mind and the whole community as a single Dharma-realm. In this context, embodying this Bodhisattva spirit and aiming to encourage people to embark on the path to One Mind, Wonhyo emphasizes that the practices of cessation and insight are indispensable to each other for entering the path to enlightenment, likening them to the two wings of a bird or the two wheels of a cart. 36
Wonhyo's interconnected spirit of unity between internal practice (returning to the source of One Mind) and external practice (benefiting sentient beings) is also evident in the Treatise on the Vajrasamādhi Sūtra. This spirit is highlighted in his teachings on the Bodhisattva path and ethical conduct for practitioners in the Commentary on the Chapter of the Bodhisattva Precepts in the Brahmā's Net Sūtra and the Essentials of Observing and Violating the Bodhisattva Prātimokṣa. 37 He advocates for a profound understanding that karma and retribution can vary in different contexts, urging deep insight into the varied circumstances of sentient beings. 38

For instance, in his Essentials of Observing and Violating the Bodhisattva Prātimokṣa, Wonhyo comprehensively examines which actions (karma) lead to positive outcomes (retribution) by considering various aspects such as the actor's intentions and context. Through this analysis, Wonhyo's nuanced approach to Buddhist ethics extends beyond individual considerations to encompass broader contexts and the interplay between individuals and their communities. In particular, in evaluating the moral quality of actions and their consequences, Wonhyo's methodology aligns with community psychology's emphasis on 'context', which encapsulates all structural influences affecting an individual's life (Kloos et al. 2012, pp. 10-11). Both approaches consider the wider structural framework in which ethical behavior occurs, focusing on the relationships of persons and contexts (Kloos et al. 2012, p. 11), demonstrating a holistic perspective on moral evaluation. Wonhyo does not judge based on absolute standards or on distinctions between self and others.

In this way, Wonhyo introduces an ideal community ethic that gently guides sentient beings based on their unique circumstances. Following this logic, all beings possess Buddha-nature and, in an interconnected one Dharma-realm, have the potential to rediscover their original mind, the One Mind, ensuring not only their own salvation but also that of other sentient beings. This humanistic and relational spirit is foundational to his teachings, sharing similarities with community psychology's emphasis on the connection between individuals and the community based on open recognition and communication of values and assumptions, as well as on participatory community action in research.

Furthermore, Wonhyo unfolded an egalitarian communal practice of Non-obstruction with the enlightenment that anyone can achieve Buddhahood by uncovering the Tathāgatagarbha, or Buddha-nature. 39 This egalitarian orientation towards individuals and the community as the greater self is also found in community psychology's emphasis on equality. Community psychologists' egalitarian consciousness is expressed through their approach to problem-solving from a structural or relational, rather than an individual, perspective, and is implemented through respect for diversity (Kloos et al. 2012, pp. 55-56). In conducting research, community psychologists maintain equality in their relationships with research participants. Like Wonhyo, they eschew the sense of superiority or posture that researchers may easily fall into, as well as involvement through admonition and education, instead maintaining an equal relationship while exploring. Thus, they emphasize that 'your attitude of respect and willingness to listen to the observational subjects may be more important than what you do' (Kloos et al. 2012, pp. 55-57, 91-93; 2023, p. 149).
This attitude connects with the practice of One-flavor and the Bodhisattva path in that it links self-benefit and benefiting others in a non-dualistic manner. Likewise, Wonhyo's egalitarian views on the Bodhisattva path of practical humanism and relational Buddhist psychology find a similar spirit of liberty and equality in community psychologists' participatory approach to community action and research, which offers respect for all individuals and healing for the collective psyche. In both schemes, the researcher-community, or practitioner-community, relationship can be compared to that of guest and host (Kloos et al. 2012, p. 80), partner, or collaborator (Kloos et al. 2012, pp. 80-83).

The Relational Thinking of Hwajaeng and Community Psychology

Up to this point, we have examined the community-psychological characteristics of Wonhyo's One Mind practice view of One-flavor, which forms the foundation for Hwajaeng's relational thinking aimed at leading many to liberation. For this purpose, this study has looked into the dual relationship between enlightenment and non-enlightenment connected to the One Mind as an aspect of individuals within the community, and has discussed One-flavor Bodhisattva practices as aspects oriented towards the community (the greater self). Viewed through the lens of core concepts of community psychology such as 'connection' and 'relationship', a major characteristic of Wonhyo's theory and practice is his emphasis on arousing practitioners to recognize the interconnected relationship between various aspects of enlightenment and Bodhisattva practice within the structure of the Dharma-realm, as positioned in One-flavor practice.

With reference to this, this study aimed to demonstrate that the humanistic and relational characteristics of the One Mind and Hwajaeng thought not only form the basis for a healthy community among human communities but also extend to all living beings and the entire Dharma world. Furthermore, this study examined the community-psychological linkages of the Bodhisattva practices that Wonhyo spread through unhindered social practice, based on such theoretical foundations.

Wonhyo is renowned for his life of Non-hindrance, which transcended both the monastic and secular lives. He authored over 200 works and, after breaking the monastic precepts, embraced the role of a lay practitioner (小性居士). In this capacity, he disseminated his teachings to the public by singing the song of Non-hindrance in marketplaces. This act was a manifestation of his enlightenment through the concept of the One Mind, which subsequently informed his practice of Mahāyāna Bodhisattvahood and the ideals of Hwajaeng. Central to his philosophy is the notion of Dharma interdependence, articulating that individuals, individuals within communities, and communities themselves are interconnected through the One Mind. This interconnectedness enables the resolution of seemingly conflicting ideas or perspectives by finding common solutions, a scheme Wonhyo termed 'Hwajaeng'. In essence, the One Mind, as both potential and problem, underpins the spirit of Hwajaeng: through interconnected communication between various aspects of relational Buddhist practice, it facilitates the discovery of solutions within the problems inherent in doctrines or communities themselves.
In relation to Wonhyo's spirit of Hwajaeng, Kim (2006), in his interpretation of Wonhyo's Commentary on the Awakening of Faith in Mahāyāna, observes that the One Mind encompasses the dual concept of opening/sealing (開合). This concept unveils infinite meanings as the dharma of two gates when opened, and upon sealing, it surpasses the binary distinction between doctrine and essentials (宗要). Furthermore, in the Treatise on the Vajrasamādhi Sūtra, the origin of the One Mind is compared to the truth of the sealing (合) and essentials (要) of Mahāyāna, and the sea of three emptinesses to the truth of the opening (開) and doctrine (宗) that Mahāyāna unfolds, where opening/sealing, doctrine/essentials (宗要), and proposition/refutation (立破) circulate and support each other in mutual dependence. Borrowing the concept of 'différance' from Derrida, Kim interprets this as symbolizing the 'cohabitation of differences': a duality in which difference coexists (Kim 2006, pp. 262-63).

In this scheme of the 'cohabitation of differences', everything exists in connection while maintaining differences. Seen from the ontological perspective, this can be interpreted in the context of an existential logic that generously acknowledges the existing world as 'opening without complexity, uniting without narrowness, establishing without gain, and breaking without loss', contrasting entity (existence/nonexistence)-oriented thought with being-like thought, as seen in Heidegger's two schemes of Western philosophy. According to this scheme, traditional metaphysical thought, grounded in Cartesian rationality, is characterized as entity-like thought. In contrast, naturalistic, unconscious, and dynamic thought, which emphasizes relations or processes in various contexts, is conceptualized as 'being', 'difference', or creative process-like thought, employing terms from Heidegger, Deleuze, or Bergson, respectively. 40 In many cases, traditional Western metaphysics, i.e., the former, is presupposed as an egocentric and possessive ontic worldview. That is, rather than recognizing the appearance of all things as they are (seinlassen) 41, it adheres to a Cartesian paradigm that distinguishes between subject and object according to the perception of the subject.

This essentialist perspective fundamentally isolates differences under the notion of an inherent identity with a shared essence. Consequently, it views the relationships between differences in terms of 'exclusion and conquest', and considers 'difference, discord, and incompatibility' as fundamental attributes. In contrast, Wonhyo's perspective and the non-essentialist view in contemporary philosophy, which acknowledge differences in themselves and seek the cohabitation and harmony of differences, align with the aims of community psychology. This alignment is evident in their emphasis on respecting opposing viewpoints and divergent reasoning based on dialogue (Kloos et al. 2012, pp. 58-59), and on the 'reconciliation and communication of dissonances', or opposing viewpoints. This approach recognizes the inherent value of diversity and strives for a harmonious integration of disparate elements, rather than attempting to reduce them to a single, underlying essence.
In this regard, Deleuze's thought aligns significantly with Wonhyo's view. Deleuze critiques the structure that oppresses the phenomenal world by fixating on a foundation of identity. However, he defines the identity underlying the diverse phenomenal world as a principle that continuously revolves around the differences in the phenomenal world. In this respect, he argues that 'difference is the only origin, and it makes the different coexist independently of any resemblance, relating the different to the different' (Deleuze 1993, pp. 163-64), and further argues that 'The task of life is to make all repetitions coexist in a space where difference is distributed' (Deleuze 1993, p. 2). Here, if difference is compared to 'being' in Heideggerian epistemological ontology, as opposed to Cartesian substantialism, repetition can be compared to 'movement-nature qua nothingness' as intrinsic dynamism. Just as the One Mind is divided into true Suchness and arising and ceasing but remains interconnected to form a ground, difference and repetition also reveal different but interconnected aspects, unfolding at the extremity of becoming. According to Deleuze, 'To repeat is to behave, but in relation to something unique or singular, which has no similar or equivalent. And this repetition as external conduct echoes, for its own part, a more secret vibration, a more profound interior repetition within the singular that animates it' (Deleuze 1993, pp. 7-8).

Based on this premise, Deleuze criticizes the dichotomous notion that simply defines the various afflicted dharmas (染法; desires akin to those in Pandora's box) manifested in the phenomenal realm of sensory perception as evil. Instead, he explains that all desires or afflicted dharmas that have ascended from the ground (fond) are also related to this ascending ground, expressing this through the mutually dependent concepts of difference and repetition (Deleuze 1994, p. 10; Kim 2018, pp. 201, 214, 219).

Furthermore, through his interpretation of a new dialectic, Deleuze critiques the four modes of being that create representation in the realm of afflicted dharmas, seeking answers beyond these four modes (Deleuze 1993, pp. 386-87).
In this process, Deleuze, like Wonhyo in his Awakening of Faith-based thought, clarifies that his dialectic is structured such that the solution exists within the problem. It is a structure of opening and sealing where various differentiations occur from the problem field, but ultimately the solution exists within that problem. This is analogous to how the two aspects of tathatā and saṃsāra appear within One Mind, but the solution (Hwajaeng) derived from the interconnection of each aspect again opens and seals (unfolds and returns) with One Mind. From Deleuze's perspective, this is similar to the synthesis method that can be concretized as an actuality of solution within the potential One Mind, which is the problem field. In this sense, individuals can recover their original source through mutual connection and interpenetration among various series, thereby benefiting sentient beings from the origin of One Mind. Just as Wonhyo pursued Hwajaeng through interconnected dialogue between different aspects, Deleuze also developed a mode of thought that produces actuality (solution) through mutual connection and interpenetration among the various series differentiated from potentiality (problem). Through this process, the potential (or ground) is elevated to actuality, emphasizing the relative generative relationship between actuality and potential in the form of univocity, or a single meaning (Kim 2024, p. 102; Deleuze 1994, p. 253). Furthermore, just as Wonhyo cautioned against attachment to substantiality through tetralemma negation or affirmation (existence, non-existence, both, or neither), Deleuze similarly critiqued the four shackles that create representation: identity (égalité; A = A), similarity (ressemblance; A ≈ B), opposition (opposition; A ≠ non-A), and analogy (analogie; A/non-A(B) = C/non-C(D)) (Deleuze 1993, p. 386; Kim 2018, pp. 212-13).

Moreover, both Wonhyo and Deleuze acknowledged the role and significance of any logical form or content if it aided in the enlightenment where the masses become the masters (Kim 2024, p. 104). This view of expedient truth also corresponds with the approach of community psychology. Like Deleuze, community psychology considers all perspectives to be valuable within the given systems, requiring proper problem definition (Problem), methodological equivalence considering various levels of analysis (Implementation: interpenetration between various aspects), and research collaborations in which everyone wins (Solution) (Kloos et al. 2012, pp. 86, 91-96). This point also shares the spirit, process, and orientation of Hwajaeng, converging different views into a harmonious synthesis through meticulous deliberation and communication. All three approaches emphasize the shared aspects of diverse approaches and collaborations, particularly their inclusive and non-dogmatic stances towards different perspectives and modes of thought.
Likewise, Wonhyo's view of the 'cohabitation of differences' and community psychology's emphasis on 'cooperative partnership in which everyone wins' can be situated within the context of the epistemological ontology of modern and contemporary philosophy, sharing a path from Bergson's 'élan vital' and Heidegger's 'Sein' to Deleuze's and Derrida's philosophy of 'difference', rather than the substance-centric, rational thought of the West. It is precisely from this mode of thought that the affinity with community psychology becomes apparent. While the entity-based metaphysics of reason relies on differentiation that captures substantial beings through artificial reasoning, Wonhyo's and community psychology's relational thought represents a schema of epistemological ontology that affirms beings as they are, situated within their context. In this regard, it is evident that Wonhyo's philosophy aligns with the principles of community psychology, emphasizing the interplay between individuals and their communities, and between identities and differences. This alignment mirrors Wonhyo's conceptualization of a faith community and the Dharma-realm as collective manifestations of lived experiences.

In a similar vein, community psychology does not presuppose a static opposition of moral good and evil. The relationship between good and evil can change according to the situation and context. Community psychologists have shifted their focus from the previously prevalent individualistic perspective to a structural perspective. For instance, this shift in perspective is applied in seeking solutions for homelessness. Here, the focus is placed on the structure or ecology in which homeless individuals are situated, rather than on the homeless individuals themselves. This approach draws an analogy to the game of musical chairs, where a limited number of chairs are available. It warns that many societal problems, like this game, begin with the premise that someone will inevitably be left without a chair (Kloos et al. 2012, p. 6; 2023, p. 31). According to community psychologists, this perspective prompts a reconsideration of the often-unnoticed inequalities between individuals in our society. By mitigating inequalities that arise from an individualistic perspective, discussions can proceed from an egalitarian standpoint, deliberating on practical solutions within the socio-structural contexts in which individuals are embedded.

Likewise, in Wonhyo's relational thought of Hwajaeng, the emphasis is on manifesting harmonization through conflict resolution, highlighting the correlative difference of coexistence within a given structure. Therefore, we can observe that both systems of thought are not based on fundamentalism or principled dogmatism grounded in individualism or absolute authority. Instead, these perspectives evolve into relational thought that aims to resolve conflicts between the individual and others, as well as between the individual and the world, based on contextual factors or causal chains within a given structural or environmental framework (setting).
In the context of a community, or in relationships with other communities, adhering to a single viewpoint precludes Hwajaeng. Often, Wonhyo presents absolute affirmation through double negation or double affirmation to achieve One Mind. To this end, he advocates for a return to the universal source of One Mind, employing dynamic concepts like the Two Truths, the Three Natures, and the Middle Way. These concepts bridge differences across contexts and perspectives, culminating in a unified resolution known as Hwajaeng. In other words, Hwajaeng links the various sections differentiated from the problems presented within One Mind to produce a common solution. Notably, texts such as the Commentary on the Awakening of Faith, the Special Exposition of the Awakening of Faith, and the Treatise on the Vajrasamādhi Sūtra exhibit these characteristics of relational ontology and community psychology in a systematic and practical way. The core of the Hwajaeng ideology presented in the Commentary on the Awakening of Faith and the Special Exposition of the Awakening of Faith explains the coexistence of the differences between being and non-being within the horizon of the One Mind through a relational theory between the two aspects of the One Mind. Furthermore, Wonhyo describes this in terms of the relationship between enlightenment and non-enlightenment. The Treatise on the Vajrasamādhi Sūtra explains the relational theory unfolding from the One Mind by encompassing both the beginning and end of meditation practice (觀行). This practice involves abandoning appearances and returning to the original mind for personal inner cultivation, while relying on original enlightenment to benefit sentient beings through social edification, thus achieving the path of compassion and non-attachment through myriad practices.

The Commentary on the Awakening of Faith and the Special Exposition of the Awakening of Faith explain this dualism of life and death, good and evil, right and wrong, and purity and impurity from the perspective of the Dharma-realm, presenting a multi-layered view of individual and social enlightenment in practice. That is, from the enlightened state, the Dharma-body of the Tathāgata is described as the equal dharmakāya with 'one appearance', and our minds are said to 'fully possess the original enlightenment'. 42 Similarly, the concept of the One Mind in these texts is explained from two aspects (gates), both ontologically and phenomenologically. These teachings encompass all dharmas but manifest from the Tathāgatagarbha (interconnected with the ālaya-consciousness), harmonizing without being either one or different. 43
Thus, all individual differences within dualism are presented in the structure of the One Mind Dharma-realm, according to context, as the preaching of the dharmakāya Buddha, the manifestation of the saṃbhogakāya Buddha, or the edifying activity of the nirmāṇakāya Buddha. Within this multi-layered structure, Bodhisattvas carry out practices that resolve all disagreements and foster community engagement. In this respect, the relational thought of Hwajaeng, moving beyond a fixation on all distinctions and disputes, meets community psychology as the embodiment of the Mahāyāna spirit, aiming to liberate a great multitude. The foundation of liberation, the One Mind, situates every individual existence within a common context. The practice of One-flavor and Hwajaeng, which connects all aspects, becomes the path to freedom by liberating each individual within the structure (context) of the Dharma world. From this perspective, the relational thinking of harmonization through the practice of Hwajaeng provides a rich foundation and depth for community psychology.

Specifically, Wonhyo's One Mind Dharma world and Hwajaeng approach, inclusive of all individuals and communities, proposes a comprehensive and healing methodology for community psychology in several ways:

(1) It can be conducive to the constructivist approach, which emphasizes the connection between researcher and participant, the particular setting, and understanding participants' experiences and their meaning to participants, rather than just causes and effects (Kloos et al. 2012, pp. 99-100).

(2) It can contribute to critical views that emphasize how social forces and belief systems influence researchers and participants, as well as the researcher's responsibility for integrating research with social action.

(3) It can promote participatory, collaborative community research processes before beginning research and when making research decisions, as well as with regard to the products of research. In this process, Hwajaeng can serve as an example for developing a community research panel to address social issues.

(4) It can support psychopolitical validity by examining whether the research process empowers citizens to become involved in liberating social change that benefits their communities. Specifically, Wonhyo's emphasis on an equal and just footing for Hwajaeng can be applied to the attitude of participatory research in researcher-community partnerships. This approach allows for optimal involvement with a broader understanding of cultural, social, and multiple ecological levels of analysis in given community contexts for well-being and social support networks (Kloos et al. 2012, p. 100).

Regarding practical problem-solving, community psychologists demonstrate how shifting from an individualistic perspective to a structural or ecological perspective alters the way problems are defined and the interventions that can be considered, as exemplified in the case of homelessness 44 in their research action (Kloos et al. 2023, p. 34). The emphasis on connection, context, respect for research subjects, and an attentive attitude in community psychology aligns closely with the core principles of Wonhyo's discourse on interdependence in a One Mind Dharma world and his discourse on interconnectedness in Bodhisattva practice. Within this framework of interconnection, karmic affinity, and mutual causality, individuals and communities, as well as humans and nature, can pursue harmonious coexistence.
Conclusions

In this discussion, we explored the direction and practice of community psychology through the lens of Wonhyo's philosophy of the One Mind, Hwajaeng thought, and Bodhisattva practice within Korean Buddhist philosophy. Among these, the emphasis was placed on the practice of One-flavor based on the One Mind as a fundamental basis for the contextual actions proposed in community psychology.

The methods suggested by Korean Buddhism and Wonhyo can significantly contribute not only to resolving internal community issues but also to addressing conflicts between communities. For instance, one may wonder how peaceful a community destined to coexist can be in the face of conflicts with other communities. Communities are, in essence, exclusive collectives bound by certain factors. Similarly, national communities tend to be insular, with an inward orientation for the sake of national interests. This remains a prevailing issue amidst the ongoing conflicts between nations and groups. Thus, reflecting on the essence of life through the One Mind, Hwajaeng, Non-hindrance, and One-flavor practice offers insights into overcoming such issues, potentially leading to a paradigm shift towards recognizing mutual benefits.

As discussed, the hidden driving force for realizing an ideal community can be found in Buddhism, particularly in Wonhyo's thought on interdependent origination and its practical applications. Community psychology also demands an approach that allows seeing 'you and I' in a broader context by applying the concept of interconnectedness to reality and presenting a larger loop of connection. It is hoped that through such practical processes of connection, the fundamental issues that community psychologists grapple with can be addressed. Specifically, Wonhyo's theory and practice enable respect for the motives, personalities, and actions of other members as an extended self, sharing both self and karma, because individuals and communities are interconnected within one Dharma-realm, transcending mutual benefits. Furthermore, through difficult consensus processes, the practice of Hwajaeng aiming at One-flavor could facilitate the establishment of an environment where a community can exist and listen properly to its members' voices as an awakened community, seeking moral awareness and the maturity of personal and collective character. This approach is also expected to contribute to harmonizing the individual and the community, the ultimate and the conventional, and spiritual practice and social development, as pursued in community psychology, thereby elevating the sociality of ethics.

Korean culture is often likened to bibimbap (a mixed rice dish) or a patchwork, symbolizing a fusion or amalgamation of diverse elements. Similarly, Korean Buddhism aspires to realize an ideal Pure Land, or the Land of Utmost Bliss, where diversity is preserved while the harmony of the community is emphasized. This aspiration for an ideal community, amidst a world rife with conflict and difficulty, has led Korean Buddhism to underscore virtues rooted in the non-duality of self and others, such as concession and empathy (putting oneself in another's shoes), highlighting the importance of understanding diverse perspectives.
Within this context of problem awareness, this paper argues that Wonhyo's relational thinking, as demonstrated in his practice of One Mind, Hwajaeng, and Non-hindrance, offers valuable insights into the concept of connection in community psychology. Wonhyo's method of Hwajaeng, characterized by comprehensive or selective synthesis based on differences, involves a meticulous consideration of each series of causes and conditions, or the context of the relevant argument, while acknowledging both commonalities and differences. This approach transcends mere acknowledgment of differences, promoting a shared basis and harmony, just as community psychologists attempt to do by applying relevant levels of analysis to the conditions of certain results within the whole structure or context. Essentially, the aim of this approach, shared by Hwajaeng philosophy and community psychology, represents a form of horizontal and open convergence, where seemingly contradictory doctrines are examined closely to reveal their underlying compatibility. Likewise, despite its challenges, Wonhyo's doctrinal interpretations and insights prove applicable to addressing issues within community psychology. In today's context, adopting Wonhyo's methods to reconcile modern societal contradictions and conflicts is not only beneficial but essential.

In summary, the discussions above suggest that Wonhyo's Hwajaeng-centric philosophy, characterized by a humanistic approach and a relational Buddhist nature, offers valuable lessons for community psychology. Although each doctrine that Wonhyo harmonized belongs to a specific school, the doctrines not only maintain their independence but also amalgamate into a harmonious whole in a One Mind Dharma world. In this respect, his philosophy remains relevant to the interconnected nature of modern society, where independence is still necessary alongside harmony. By harmonizing disputes and conflicts, Wonhyo connected Buddhist teachings with the cycles of saṃsāra and nirvāṇa, and enlightenment and ignorance, deriving a common solution while utilizing various interconnected levels of analysis. In particular, Wonhyo's One Mind Dharma world model, when applied developmentally in theory and practice, could lead to the gradual elimination of prejudices and misunderstandings potentially rooted in individual and collective karmic consciousness. Furthermore, this model suggests, in some respects, an inclusive consideration of the Earth and other life forms in our Dharma world, based on compassion and mutual respect. This can include multiple locations and be applicable to microsystems and larger organizations as well. It also encompasses environment, situation, scene, community, place, and location (Kloos et al. 2023, pp. 45, 200).

Notes

Such attitudes ultimately lead to the destruction of the community, necessitating vigilance and proactive measures within the community. 'Even if the character formed in a democratic society is righteous, if those citizens come under the rule of wrongful authority, could they too not be free from humanity's barbarism and inhumane attitudes?' (Kim 2013, p. 151).
6. Before community psychology could demonstrate its influence, Lewin articulated the following position regarding social psychology: social psychology better demonstrates what is needed than either psychology or sociology alone. Thus, it is necessary to overcome challenges and strive for continuous development. In this process, science should be dealt with in the realm of problems rather than the realm of data, and different problem domains require the language world of different entities and principles, with these disciplines being related in the universality of the same data (Lewin 1987, p. 173).

7. (Nordell 2022, pp. 185-221); Meditation that encourages looking inward may seem ill-suited for police trained to respond closely to external events. However, research showed that in 2020, more than 25% of police officers suffered from depression, PTSD, suicidal tendencies, and mental disorders, and in 2019, more police committed suicide than were killed in the line of duty. The chronic stress faced by the American police became a burden on themselves and the communities they were sworn to protect. Yet the practice of mindfulness led to significant changes. One example is the police department in Bend, Oregon, where mindfulness over time brought about changes in the officers and the community. Despite initial skepticism, significant shifts occurred when mindfulness was practiced, including reductions in injuries and medical costs and improvements in performance metrics. Complaints relative to reports decreased by 12% over the six years since 2012, and there was a decrease in the use of force by the police. Moreover, compared to 2012, the number of times force was used relative to all reported calls decreased by 40% in 2019. Since mindfulness initially focused on personal benefits without exploring its implications for interpersonal relationships or social practice, there is a need for its active utilization in community psychology moving forward.

8. Neuroscientist Kang's research demonstrates these characteristics of Buddhist meditation through experiments. Her work with Buddhist monks showed that meditation could improve the capacity to consider and care for the inner experiences of others (Nordell 2022, pp. 196-97). Regarding this issue, we can also refer to the following sources: (Kang et al. 2013, pp. 1-8); (Kang 2018, pp. 115-19); (Kang and Falk 2020, pp. 1378-89).
9. This term, however, does not refer to the specific 'Humanistic Buddhism' (人間佛敎) developed by Masters Yin Shun (印順, 1906-2005) and Hsing Yun (星雲, 1927-2023) based on Taixu (太虛, 1889-1947)'s 'Life Buddhism' (人生佛敎). Instead, it denotes a broader, general Buddhism with a humanistic focus, particularly within the Mahāyāna tradition. Various scholars have interpreted Wonhyo's concept of One Mind and its relation to Hwajaeng in different ways: Jong-hong Park posits that 'gae-hap' (開合, opening and sealing) and 'jong-yo' (宗要, doctrine and essentials) serve as the logical foundation for Wonhyo's thought, representing the middle way between extremes; Ik-jin Ko employs a dialectical approach, associating Madhyamaka with the gate of Suchness and Yogācāra with the gate of arising and ceasing, suggesting that the Awakening of Faith synthesizes both; Gil-am Seok views One Mind from a Huayan perspective, focusing on its non-dualistic stance that transcends the relative true-false amalgamation of Tathāgatagarbha; Yeon-shik Choi interprets One Mind as the foundation and goal of Hwajaeng, emphasizing the essential sameness of all beings; Shigeki Sato aligns 'returning to the source of One Mind' and 'benefiting sentient beings' with Wonhyo's discourse, emphasizing the non-dualistic perspective; Sung-bae Park emphasizes the practical implications of Wonhyo's philosophy, suggesting that it offers valuable insights for contemporary conflict resolution and intercultural dialogue; Ki-young Lee emphasizes the comprehensive nature of One Mind and explains Hwajaeng in relation to emptiness and Tathāgatagarbha theory; Young-seop Ko understands Hwajaeng as a skillful means or integrative logic premised on returning to One Mind, One-flavor (il-mi, 一味), and One enlightenment (il-gak, 一覺); Yu-jin Choi characterizes Wonhyo's approach as developing theories based on One Mind with the clear purpose of returning to its source while harmonizing the various doctrinal theories that manifest in reality; Tae-won Park views the nature of Hwajaeng theory as 'harmonization through comprehensive inclusion' rather than 'syncretism as a reconciliation theory', emphasizing the characteristics of the causal series that establish perspectives; Jae-hyun Park views Hwajaeng as an approach to resolve the lack of communication between different sectors, finding clues for resolving contradictions in the 'comprehensive inclusion' based on the sentient beings' mind of arising and ceasing; Young-geun Jeong also considers One Mind as the theoretical foundation for educating and saving sentient beings, noting Wonhyo's presentation of a Pure Land faith suitable for the capacities of ordinary people; Seong-cheol Kim, referencing the Critique of Inference, emphasizes that Wonhyo was not a logical absolutist who believed all Buddhist doctrines could be understood through inferential reasoning (Kim 2018, pp. 7-12); Byung-Wook Lee and Jong-Wook Kim focus on Wonhyo's Mind-only Pure Land thought (Lee 2015, pp. 29-58; Kim 2015, pp. 37-62); Taesoo Kim and Deok-sam Kim examine the characteristics and potential real-world applications of Wonhyo's method of harmonizing the four gates of Pure Land (Kim and Kim 2023, pp. 7-39). Taesoo Kim also analyzes Wonhyo's unique Hwajaeng method using the complex tetralemma and explores its applicability to contemporary issues, synthesizing various methodologies including Deleuze's open dialectics (Kim 2018, pp. 1-283);
Hyung-hyo Kim focuses on the 'method explained as the theory of sameness and difference (同異論) in the Treatise on the Vajrasamādhi Sūtra and the theory of identity and difference (一異論) in the Commentary on the Awakening of Faith' (Kim 2006, pp. 264-74); Charles Muller focuses on Wonhyo's horizontal commentary approach, which reveres all Mahāyāna Sūtras without belonging to a specific sect (Muller 2015, pp. 9-44); Robert Buswell emphasizes Wonhyo's hermeneutical approach, which seeks to harmonize doctrinal disputes by revealing their underlying unity (Buswell 2017, pp. 131-60); Eun-su Cho highlights Wonhyo's unique interpretative strategies, particularly his use of the essence-function (體用) paradigm to reconcile seemingly conflicting doctrines (Cho 2013, pp. 39-54); Sumi Lee investigates the ālayavijñāna concept in Wonhyo's Commentary on the Awakening of Faith, as well as his middle way interpretation of buddha-nature and icchantika in the Mahāyāna Mahāparinirvāṇa Sūtra from the perspective of Buddhist ethics (Lee 2019a, p. 536; 2019b, pp. 231-48); Byung-hak Lee focuses on the social implications of Wonhyo's 'one-flavor practice' and 'enlightenment of others' concepts in his Treatise on the Vajrasamādhi Sūtra, examining Wonhyo's egalitarian popular Buddhist movement in opposition to aristocratic Buddhism (Lee 2006, pp. 195-228); Sem Vermeersch suggests that Wonhyo and his contemporaries' preoccupation with 'three in one' concepts was likely inspired, at least in part, by a concern for integrating opposing communities in the temporal world (Vermeersch 2015, pp. 95-117).

16. Trbh (2007), p. 124cd: 'niṣpannas tasya pūrveṇa sadā rahitatā tu yā…avikārapariniṣpattyā sa pariniṣpannaḥ…parikalpitena svabhāvena paratantrasya sadā rahitatā pariniṣpannaḥ/rahitatā ca dharmatā…pariniṣpannaś ca paratantradharmatety ataḥ paratantrāt pariniṣpanno nānyo nānanya iti boddhavyaḥ' [The perfected (nature) is the constant absence of that (i.e., the dependent nature) in the previous (i.e., the imagined nature).…By the complete fulfillment without any change, that is the perfected (nature).…The perfected (nature) is the constant absence of the imagined nature in the dependent nature. And this absence (rahitatā) is the true nature (dharmatā). And the perfected (nature) is the true nature of the dependent (nature); therefore, the perfected (nature) is neither other than the dependent (nature) nor the same as the dependent (nature); this should be understood].

17. This other-dependent nature serves as the foundation for both the conceptualized and the originally pure reality. The conceptualized aspect views other-dependent nature from the standpoint of conceptual construction, while the original purity perspective reveals the intrinsic departure of other-dependent nature from its conceptualized constructs (Ahn 2005, pp. 61-90).

18. In a defiled state, other-dependent nature transforms into the nature of discrimination, and in a purified state, it becomes Tathatā. Thus, the relationship between the nature of discrimination and Tathatā is a matter of how other-dependent nature, as relationality, manifests itself (Yoo 2010, pp. 268-70).

22. ≪大乘起信論別記≫ (Special Exposition on the Awakening of Faith in the Mahāyāna) (HBJ 1, p. 683b): '言始覺者 卽此心體 隨無明緣動 作妄念 而以本覺熏習力故 稍有覺用 乃至究竟 還同本覺 是名始覺' (Initial enlightenment is precisely this mind-essence moving according to the conditions of ignorance, generating deluded thoughts; but due to the power of the habituation of original enlightenment, gradually there is the function of enlightenment, which eventually becomes identical with original enlightenment; this is called initial enlightenment).
683b), '言始覺者 卽此心體 隨無明緣動 作妄念 而以本覺熏習力故 稍有覺用 乃至究竟 還同本覺 是名始覺' (Initial enlightenment is precisely this mind-essence moving according to the conditions of ignorance, generating deluded thoughts; but due to the power of the habituation of original enlightenment, gradually there is the function of enlightenment, eventually becoming identical with original enlightenment; this is called initial enlightenment). 23 This concept significantly contributes to a wellness approach in psychotherapy, integrating spiritual well-being and health. Gonsiorek et al. discuss the ethical challenges and opportunities involved in incorporating spirituality and religion into psychotherapy, highlighting the importance of such integration (Gonsiorek et al. 2009, p. 387). 25 This appears to be influenced by the Laṅkāvatāra Sūtra. 3 4 Community psychologists strive to understand individuals within the social context. 5 10 This does not mean to label Wonhyo's philosophy as relational or humanistic Buddhism per se, but it indicates that such modern Buddhist characteristics are already present in Wonhyo's thoughts and practices, represented by the concepts of One Mind, Nonhindrance, and harmonizing disputes. 11
22,656.2
2024-07-16T00:00:00.000
[ "Psychology", "Philosophy" ]
Characterizing multipartite entanglement classes via higher-dimensional embeddings

Witness operators are a central tool to detect entanglement or to distinguish among the different entanglement classes of multiparticle systems, which can be defined using stochastic local operations and classical communication (SLOCC). We show a one-to-one correspondence between general SLOCC witnesses and a class of entanglement witnesses in an extended Hilbert space. This relation can be used to derive SLOCC witnesses from criteria for full separability of quantum states; moreover, given SLOCC witnesses can be viewed as entanglement witnesses. As applications of this relation we discuss the calculation of overlaps between different SLOCC classes and the SLOCC classification in 2 × 3 × 3-dimensional systems.

I. INTRODUCTION

Entanglement is considered to be an important resource for applications in quantum information processing, making its characterization essential for the field [1,2]. This includes its quantification and the development of tools to distinguish between different classes of entanglement. In general, entanglement is a resource if the parties are spatially separated and therefore the allowed operations are restricted to local operations assisted by classical communication (LOCC). It can neither be generated nor increased by LOCC transformations. Hence, convertibility via LOCC imposes a partial order on the entanglement of the states, and this order has been studied in detail [3][4][5][6][7][8].

For multipartite systems the classification via LOCC is, however, even for pure states very difficult, so one may consider a coarse-grained classification. This can be done using the notion of stochastic local operations assisted by classical communication (SLOCC). By definition, an SLOCC class is formed by those pure states that can be converted into each other via local operations and classical communication with non-zero probability of success [9]. SLOCC classes and their transformations have been characterized for small system sizes and symmetric states [9][10][11][12][13][14], and it has been shown that there are finitely many SLOCC classes for tripartite systems with local dimensions of up to 2 × 3 × m and infinitely many otherwise [15].

Another important problem in entanglement theory is the separability problem, i.e., the task to decide whether a given quantum state is entangled or separable. Even though several criteria have been found which can decide separability in many instances [1,2,[16][17][18][19][20][21], the question whether a general multipartite mixed state is entangled or not remains highly non-trivial. In fact, if the separability problem is formulated as a weak membership problem, it has been proven to be computationally NP-hard [22,23] in the dimension of the system.
One method to certify entanglement uses entanglement witnesses [2,24,25]. An entanglement witness is a hermitian operator which has a positive expectation value for all separable states but gives a negative value for at least one entangled state. In contrast to other criteria, one main advantage of witnesses lies in the fact that no complete knowledge of the state is necessary and one just has to measure the witness observable. A special type of witnesses are projector-based witnesses of the form W = λ𝟙 − |ψ⟩⟨ψ|, with λ being the maximal squared overlap between the entangled state |ψ⟩ and the set of all product states. Such projector-based witnesses can also be used to distinguish between different SLOCC classes [26,27]. In that case, λ is the maximal squared overlap between a given state |ψ⟩ in SLOCC class S_|ψ⟩ and the set of all states within another SLOCC class S_|ϕ⟩. If a negative expectation value of W is measured, the considered state cannot be within the convex hull of S_|ϕ⟩ or lower entanglement classes. In this context one should note that such statements require an understanding of the hierarchic structure of SLOCC classes, in the sense that some classes are contained in others [26,27].

In this paper we establish a one-to-one correspondence between general SLOCC witnesses for multipartite systems and a class of entanglement witnesses in a higher-dimensional system, built from two copies of the original one. This extends the results of Ref. [28] from the bipartite setting to the multipartite one and provides at the same time a simpler proof. The equivalence provides not only a deeper insight into the structure of SLOCC classes but also enables one to construct whole sets of entanglement witnesses for high-dimensional systems from the SLOCC structure of lower dimensions and vice versa. As such, from the solution for one problem, the solution to the related one readily follows.

The paper is organized as follows. In Section II we briefly review the notions of SLOCC operations, entanglement witnesses and SLOCC witnesses. Section III states the main result of our work, the one-to-one correspondence between certain entanglement witnesses and SLOCC witnesses. Furthermore, as optimizing the overlap λ between SLOCC classes is in general a hard problem and as such often not feasible analytically, a possible relaxation of the set of separable states to states with positive partial transpose is discussed. Section IV focuses on systems consisting of one qubit and two qutrits. Using numerical optimization, we find the maximal overlaps between all pairs of representative states of one SLOCC class and arbitrary states of another SLOCC class. The implications of these results for the hierarchic structure of SLOCC classes are then discussed. Section V concludes the paper and provides an outlook.

II. PRELIMINARIES

In this section the basic notions and definitions are briefly reviewed. We start with the notion of SLOCC equivalence of two states and then move on to the definition of entanglement witnesses. Finally, we will relate both concepts by recapitulating the notion of witness operators that are able to separate between different SLOCC classes.

A. SLOCC classes

As mentioned before, two pure states are within the same SLOCC class if one can convert them into each other via LOCC with a non-zero probability of success. It can be shown that this implies the following definition [9].
That is, an SLOCC class or SLOCC orbit includes all states that are related by local, invertible operators. To extend this definition to mixed states, one defines the class S_|ψ⟩ with the representative |ψ⟩ as those mixed states that can be built as convex combinations of pure states within the SLOCC orbit of |ψ⟩ and of all pure states that can be approximated arbitrarily closely by states within this orbit [26,27].

B. Entanglement witness

A hermitian operator that can be used to distinguish between different classes of entanglement is called a witness operator. Recall that a mixed state that can be written as a convex combination of product states of the form |ψ_s⟩ = |A⟩|B⟩⋯|N⟩ is called fully separable, and states which are not of this form are entangled [1,2]. A witness operator that can certify entanglement has to fulfill the following properties [24,25]:

Definition 2. A hermitian operator W is an entanglement witness if (i) tr(ρ_s W) ≥ 0 for all separable states ρ_s, and (ii) tr(ρ_e W) < 0 for at least one state ρ_e.

Hence, W witnesses the non-membership with respect to the convex set of separable states. If tr(ρW) < 0 for some state ρ, then W is said to detect ρ. A special class of witness operators are projector-based witnesses. Their construction is based on the maximal value λ of the squared overlap between a given entangled state |ψ⟩ and the set of all product states {|ψ_s⟩}. More precisely, W = λ𝟙 − |ψ⟩⟨ψ| with |ψ⟩ being some entangled state and λ = sup_{|ψ_s⟩} |⟨ψ|ψ_s⟩|² is a valid entanglement witness [2].

C. SLOCC witness

The concept of entanglement witnesses can be generalized to SLOCC witnesses. An SLOCC witness is an operator from which one can conclude that a state is not in the SLOCC class S_|ψ⟩ [26,27]. Thus, if tr(ρW) < 0, W detects that ρ is not within S_|ψ⟩. Note that it suffices to check positivity on all pure states |η⟩ in the set of mixed states S_|ψ⟩, as these form the extreme points of this set. Also, if one considers |ψ⟩ = |A⟩|B⟩⋯|N⟩, then the set of all SLOCC-equivalent states is just the set of all product states and the SLOCC witness is just a usual entanglement witness.

One can construct an SLOCC witness via W = λ𝟙 − |ϕ⟩⟨ϕ|, where λ denotes the maximal squared overlap between all pure states |η⟩ in the SLOCC class S_|ψ⟩ and the representative state |ϕ⟩.

A special class of SLOCC witnesses are those verifying the Schmidt rank of a given bipartite state. Note that the Schmidt rank is the only SLOCC invariant for bipartite systems, and a one-to-one correspondence between Schmidt number witnesses and entanglement witnesses in an extended Hilbert space has been found [28]. In the next section we will show that in fact there is a one-to-one correspondence between SLOCC witnesses and entanglement witnesses for arbitrary multipartite systems.

III. ONE-TO-ONE CORRESPONDENCE BETWEEN SLOCC AND ENTANGLEMENT WITNESSES

In the following we will show how to establish a one-to-one correspondence between SLOCC witnesses and certain entanglement witnesses within a higher-dimensional Hilbert space for arbitrary multipartite systems. In order to improve readability, our method will be presented for the case of tripartite systems; however, the generalization to more parties is straightforward. Then, we will discuss one possibility to use this correspondence to derive SLOCC witnesses.
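Before turning to the correspondence, here is a minimal numerical sketch of the projector-based witnesses of Sec. II B (our own illustration, not taken from the paper). For the three-qubit GHZ state the maximal squared overlap with product states is known to be 1/2, so W = (1/2)𝟙 − |GHZ⟩⟨GHZ| is an entanglement witness; since product vectors form the SLOCC class of |A⟩|B⟩|C⟩, it is at the same time the simplest SLOCC witness:

```python
import numpy as np

# Three-qubit GHZ state |GHZ> = (|000> + |111>)/sqrt(2)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# Projector-based witness W = lambda*1 - |GHZ><GHZ| with lambda = 1/2,
# the maximal squared overlap between |GHZ> and pure product states.
lam = 0.5
W = lam * np.eye(8) - np.outer(ghz, ghz)

def expval(state, op):
    """Expectation value <s|op|s> for a pure state vector."""
    return np.real(np.vdot(state, op @ state))

# A product state |+>|+>|+> gives a non-negative value ...
plus = np.ones(2) / np.sqrt(2)
product = np.kron(np.kron(plus, plus), plus)
print(expval(product, W))   # 0.25 >= 0

# ... while the GHZ state itself is detected (negative value).
print(expval(ghz, W))       # -0.5 < 0
```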
A. The correspondence between the two witnesses

Let us start with formulating the problem as follows: Consider the pure state |ψ⟩, which is a representative state of the SLOCC class S_|ψ⟩. Then all pure states |η⟩ within the SLOCC orbit of |ψ⟩ can be reached by applying local invertible operators A, B and C, that is, |η⟩ = A ⊗ B ⊗ C |ψ⟩. Here, one has to take care that |η⟩ is normalized; so, if considering general matrices A, B, C, one has to renormalize the state.

The aim will be to maximize the overlap between a given state |ϕ⟩ and a pure state |η⟩ within S_|ψ⟩, sup_{|η⟩∈S_|ψ⟩} |⟨ϕ|η⟩|², which is the main step for constructing the projector-based witness. Stated differently, the quantity of interest is the minimal value λ > 0 such that sup_{|η⟩∈S_|ψ⟩} |⟨ϕ|η⟩|² ≤ λ. It can easily be seen that this is true if and only if ⟨η|(λ𝟙 − |ϕ⟩⟨ϕ|)|η⟩ ≥ 0 holds for all such |η⟩. One can then define a witness operator W = λ𝟙 − |ϕ⟩⟨ϕ| which, with the definition of |η⟩ from before, satisfies ⟨η|W|η⟩ ≥ 0; see Eqs. (7,8). Note that in the formulation of Eqs. (7,8) the normalization of |η⟩ = A ⊗ B ⊗ C |ψ⟩ is irrelevant; this trick has already been used in Ref. [29].

The key idea to establish the connection is the following: In order to prove that W is an SLOCC witness, one has to minimize in Eq. (7) over all matrices A, B, C, which do not have any constraint anymore. A matrix like A acting on the Hilbert space H_A can be seen as a vector on the two-copy system H_A1 ⊗ H_A2. Then, the remaining optimization is the same as optimizing over all product states in the higher-dimensional system and requesting that the resulting value is always positive. Consequently, the SLOCC witness W corresponds to a usual witness W̃ on the higher-dimensional system. More precisely, as stated in the following theorem, one can show that if Eq. (8) holds, then the operator W̃ = W ⊗ |ψ*⟩⟨ψ*| is positive on all separable states |ξ_sep⟩ and vice versa. Here and in the following, * denotes complex conjugation in a product basis.

Theorem 4. Consider the operator W on the tripartite space H = H_A ⊗ H_B ⊗ H_C and the operator W̃ = W ⊗ |ψ*⟩⟨ψ*| on the two-copy space H ⊗ H. Then, W is an SLOCC witness for the class S_|ψ⟩ if and only if the operator W̃ is an entanglement witness with respect to the split (A₁A₂|B₁B₂|C₁C₂), where |ξ_sep⟩ are product states within the two-copy system, that is, they are of the form |ξ_sep⟩ = |A⟩_A1A2 ⊗ |B⟩_B1B2 ⊗ |C⟩_C1C2.

Proof. The "only if" part ("⇒") of the proof can be shown as follows: One can always write the witness operator W in its eigenbasis, Eq. (10). Moreover, Eq. (11) holds. We consider a single summand in Eq. (10) and use a representation of the SLOCC operations A, B, C and the state |ψ⟩ in terms of their matrix elements. We write ket-vectors like |Y⟩₁₂ = Σ_ij Y_ij |ij⟩ on the two-copy Hilbert space of each particle Y ∈ {A, B, C}, where the indices 1 and 2 indicate the copies of the system; in the same way we obtain the corresponding expressions for the remaining operators. Thus Eq. (10) can be rewritten as Eq. (14). So far, the vectors |Y⟩₁₂ with Y ∈ {A, B, C} are not entirely arbitrary, as the operators A, B and C are invertible. However, as any non-invertible matrix can be approximated arbitrarily well by invertible matrices and the expression under consideration is continuous, the positivity condition in Eq. (14) holds for any vectors |Y⟩₁₂. Let us finally note that it is straightforward to see that if W is not positive semidefinite, then W̃ is not positive semidefinite either. This completes the "only if" part of the proof.
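The vectorization identity at the heart of this proof, ⟨ξ_sep| W ⊗ |ψ*⟩⟨ψ*| |ξ_sep⟩ = ⟨η|W|η⟩ with unnormalized |η⟩ = A ⊗ B ⊗ C |ψ⟩, is easy to check numerically. The sketch below is our own; it assumes the row-major convention |Y⟩₁₂ = Σ_ij Y_ij|ij⟩ used above and verifies the identity for random operators:

```python
import numpy as np
rng = np.random.default_rng(1)

def rand_c(n):
    """Random complex n x n matrix."""
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

d = 2
# Representative |psi>: three-qubit GHZ; witness built from a random |phi>
psi = np.zeros(d**3, complex); psi[0] = psi[-1] = 1 / np.sqrt(2)
phi = rng.normal(size=d**3) + 1j * rng.normal(size=d**3)
phi /= np.linalg.norm(phi)
W = 0.5 * np.eye(d**3) - np.outer(phi, phi.conj())

A, B, C = rand_c(d), rand_c(d), rand_c(d)

# Left-hand side: <eta| W |eta> with unnormalized |eta> = A x B x C |psi>
eta = np.kron(np.kron(A, B), C) @ psi
lhs = np.vdot(eta, W @ eta)

# Right-hand side: <xi| W (x) |psi*><psi*| |xi> with |xi> = |A>|B>|C>,
# |Y>_12 = sum_ij Y_ij |ij| (row-major vectorization of each matrix).
Wt = np.kron(W, np.outer(psi.conj(), psi))   # ordering A1 B1 C1 A2 B2 C2
xi = np.kron(np.kron(A.reshape(-1), B.reshape(-1)), C.reshape(-1))
# reorder |xi> from A1 A2 B1 B2 C1 C2 to A1 B1 C1 A2 B2 C2
xi = xi.reshape([d] * 6).transpose(0, 2, 4, 1, 3, 5).reshape(-1)
rhs = np.vdot(xi, Wt @ xi)

print(np.allclose(lhs, rhs))  # True
```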
In order to start the discussion, we first note that the statement of the theorem clearly holds for any number of parties; the proof can directly be generalized. Also, we note that the complex conjugation |ψ*⟩ is relevant, as there are instances where |ψ*⟩ and |ψ⟩ are not equivalent under SLOCC [7].

Second, we compare the theorem with known results. The theorem presents a generalization of the main result from Ref. [28] from the bipartite to the multipartite case. The SLOCC classes in the bipartite case are characterized by the Schmidt number, and the Schmidt witnesses considered in Ref. [28] are just the SLOCC witnesses for the bipartite case. A similar connection for the special case of bipartite witnesses for Schmidt number one has also been discussed in Ref. [30]. Furthermore, for the multipartite case, where the Schmidt number classification is a coarse graining of the SLOCC classification, a connection between Schmidt witnesses and entanglement witnesses has been proved in Ref. [31]. This connection, however, is not equivalent to ours, as the dimension of the enlarged space in Ref. [31] is in general larger.

Third, Theorem 4 provides the possibility to consider the problem of maximizing the overlap of two states under SLOCC from a different perspective. That is, by solving the problem of finding the minimal value of λ for which W̃ = (λ𝟙 − |ϕ⟩⟨ϕ|) ⊗ |ψ*⟩⟨ψ*| is an entanglement witness for full separability, one can determine the value of the maximal overlap between |ϕ⟩ and |ψ⟩ under SLOCC operations. In order to provide a concrete application of Theorem 4, we derived in the Appendix the maximal squared overlap between an N-qubit GHZ state and the SLOCC class of the N-qubit W state using the relation derived above. The resulting value is 3/4 for N = 3 (numerically already known from Ref. [26]) and 1/2 for N ≥ 4 (for four-qubit states this value has already been found in Ref. [27]). It should be noted that there is an asymmetry: While the SLOCC class of the three-qubit W state can approximate the GHZ state only to a certain degree, one can find states in the SLOCC orbit of the GHZ state arbitrarily close to the W state [26]. Finally, our result reflects that the separability problem as well as the problem of deciding whether two tripartite states are within the same SLOCC class are both computationally highly non-trivial. In fact, they were shown to be NP-hard [22,23,32].

In the following section we will discuss a relaxation of the witness condition to be positive on all separable states. Instead, one can consider the condition that W̃ should be positive on states having a positive partial transpose (PPT) for any bipartition.
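As a quick numerical cross-check of the 3/4 value quoted above, one can directly maximize the normalized squared overlap |⟨GHZ|A ⊗ B ⊗ C|W⟩|²/⟨η|η⟩ over unconstrained complex matrices, in the spirit of Eq. (7). The following sketch is our own; convergence to the supremum is not guaranteed for every starting point, hence the random restarts:

```python
import numpy as np
from scipy.optimize import minimize

ghz = np.zeros(8); ghz[[0, 7]] = 1 / np.sqrt(2)   # (|000>+|111>)/sqrt(2)
w = np.zeros(8);   w[[1, 2, 4]] = 1 / np.sqrt(3)  # (|001>+|010>+|100>)/sqrt(3)

def neg_overlap(x):
    """Negative of |<GHZ|A x B x C|W>|^2 / <eta|eta> (scale-invariant)."""
    m = (x[:24] + 1j * x[24:]).reshape(3, 2, 2)   # A, B, C from 48 reals
    eta = np.kron(np.kron(m[0], m[1]), m[2]) @ w
    n = np.vdot(eta, eta).real
    return -abs(np.vdot(ghz, eta))**2 / n if n > 1e-12 else 0.0

best = 0.0
rng = np.random.default_rng(0)
for _ in range(20):                               # random restarts
    res = minimize(neg_overlap, rng.normal(size=48), method="BFGS")
    best = max(best, -res.fun)
print(best)  # should approach ~0.75 = 3/4 for three qubits
```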
One potential way to implement such a relaxation uses the criterion of the positivity of the partial transpose (PPT), as the set of separable states is a subset of the states which are PPT [16]. More precisely, one can demand that W̃ is positive on the set of states which are PPT with respect to all subsystems in the considered bipartite splittings, i.e., tr(ρ_A12B12C12 W̃) ≥ 0 for all ρ_A12B12C12 with ρ ≥ 0 and ρ^Ti ≥ 0 for i = A₁₂, B₁₂, C₁₂. Although the set of PPT states is known to include PPT entangled states, this relaxation of the initial conditions offers an advantage, as we are able to formulate the problem of determining λ as a semidefinite program (SDP), which provides a way to an exact result [33]. For a given λ one can consider the optimization problem

minimize: tr(ρW̃)
subject to: ρ ≥ 0, ρ^Ti ≥ 0 for i = A₁₂, B₁₂, C₁₂, tr(ρ) = 1.

Such optimization problems can be solved with standard computer algebra systems. If the obtained value in Eq. (18) is non-negative, the initial operator W = λ𝟙 − |ϕ⟩⟨ϕ| was an SLOCC witness, so λ is an upper bound on the maximal overlap.

To give an example, one may use this optimization for obtaining an upper bound on the overlap between the four-qubit cluster state and the SLOCC orbit of the four-qubit GHZ state, or vice versa. In all the interesting examples, however, one obtains only the trivial bound λ = 1. This finds a natural explanation: If λ is the exact maximal overlap, then the witness W̃ detects some entangled states which are PPT with respect to any bipartition. Consequently, relaxing the positivity on separable states to positivity on PPT states is a rather wasteful approximation in our case, and the resulting estimate on λ is also wasteful.

The key observation is that, given two pure bipartite states |φ⟩ and |ψ*⟩ in a d₁ × d₁ and a d₂ × d₂ system, respectively, the total state of Eq. (19), viewed as a state on a d₁d₂ × d₁d₂ system, is PPT but typically entangled. This holds for nearly arbitrary choices of |φ⟩ and |ψ*⟩ and small values of p [34]. Note that states of the form given in Eq. (19) lead to tr[(λ𝟙 − |φ⟩⟨φ|) ⊗ |ψ*⟩⟨ψ*| ρ] < 0 for any λ < 1, so they are detected by the witness W̃. Hence, the relaxation to states that are PPT does not, for general |φ⟩ and |ψ⟩, allow one to determine possible non-trivial values of λ for which W̃ is an entanglement witness.

We mention that in Ref. [34] operators of the form (λ𝟙 − |φ⟩⟨φ|) ⊗ |ψ*⟩⟨ψ*| with an appropriate choice of λ have been shown to be bipartite entanglement witnesses for the case where the Schmidt rank of |ψ*⟩ is smaller than the Schmidt rank of |φ⟩ for the considered bipartite splitting. This can be easily understood using our result and the results of Ref. [28], as in this case |φ⟩ and |ψ⟩ are in different bipartite SLOCC classes and |φ⟩ cannot be approximated arbitrarily closely by a state in the SLOCC class of |ψ⟩.

Finally, we add that considering other relaxations of the set of separable states may provide a way to estimate the maximal SLOCC overlap using an SDP. Here, other positive maps besides the transposition, such as the Choi map [1], or the SDP approach of Ref. [20] seem feasible.
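The SDP above is straightforward to set up in a modeling language. The sketch below is our own and assumes a recent CVXPY (version 1.2 or later) that ships the partial_transpose atom. It runs the bipartite toy version: for two qubits PPT and separability coincide, so minimizing tr(ρW) over PPT states with W = (1/2)𝟙 − |Φ⁺⟩⟨Φ⁺| returns approximately zero, confirming that λ = 1/2 makes W a witness. For the tripartite two-copy problem one would use dims=[4, 4, 4] and one PPT constraint per subsystem, where, as discussed, the bound is typically trivial:

```python
import numpy as np
import cvxpy as cp

d = 2
bell = np.zeros(d * d); bell[0] = bell[3] = 1 / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
W = 0.5 * np.eye(d * d) - np.outer(bell, bell)

rho = cp.Variable((d * d, d * d), hermitian=True)
constraints = [
    rho >> 0,
    cp.trace(rho) == 1,
    # PPT constraint: partial transpose on the second subsystem
    cp.partial_transpose(rho, dims=[d, d], axis=1) >> 0,
]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(rho @ W))), constraints)
prob.solve()
print(prob.value)  # ~0: W stays non-negative on all PPT two-qubit states
```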
IV. SLOCC OVERLAPS FOR 2 × 3 × 3 SYSTEMS

Systems consisting of one qubit, one qutrit and one system of arbitrary dimension mark the last cases which still have a finite number of SLOCC classes [15]; for more general systems the number of SLOCC classes is infinite [10]. For one qubit and two qutrits there are 17 different classes, with 12 of these being truly tripartite entangled and six of them containing entangled states with maximal Schmidt rank across the bipartitions [13,15]. Finding the maximal overlap of the representative states of the different classes not only points towards a hierarchy among them but, as shown in Section III, gives insight into the entanglement properties of states in an enlarged two-copy system. In fact, one can then construct entanglement witnesses W̃ which detect entanglement within states of dimension 4 × 6 × 6. Thus, for all pairs of representatives and SLOCC classes where λ < 1, one can construct a specific W̃ which, as discussed above, typically also detects PPT entangled states.

The unnormalized representative states of the fully entangled SLOCC classes within a 2 × 3 × 3 system are taken from Ref. [15]. One can compute the overlap between one of these states and the SLOCC orbit of another state via direct optimization. As for the GHZ class and the W state, it can happen that one class can approximate one state arbitrarily well, so we set the overlap to one if the numerically obtained value approximates this with a numerical precision of 10⁻¹². Note that an exact value of one is impossible, as the SLOCC classes are proven to be different. The values of the numerical maximization of the SLOCC overlap for the different SLOCC classes with respect to the representative states from above are given in Table I. They should be interpreted as follows: for the overlaps between |ψ […]

This also has consequences for the classification of mixed states; see Fig. 1. For a mixed state, one may ask whether it can be written as a convex combination of pure states within some SLOCC class. If a state can be written as such a convex combination of states from the orbit of |ψ₇⟩, it can also be written with states from the orbit of |ψ₆⟩, as the latter can approximate the former arbitrarily well. Consequently, there is an inclusion relation for the mixed states, as depicted in Fig. 1.

V. CONCLUSIONS

For arbitrary numbers of parties and local dimensions we showed a one-to-one correspondence between an operator W able to distinguish between different SLOCC classes of a system and another operator W̃ that detects entanglement in a two-copy system. This correspondence thereby enables us to directly transfer a solution for one problem to the other. Though the relaxation to PPT states in order to construct the entanglement witness did not prove to be helpful, for reasons stated in Section III, it may well be that other possible relaxations on the set of separable states will give more insight and a good approximation for an upper bound on the maximal overlap. As a concrete application of the presented relation, we derived the maximal overlap between the N-qubit GHZ state and states within the N-qubit W class. The calculations in Section IV for the qubit-qutrit-qutrit system not only indicate a hierarchy among the SLOCC classes but also provide us with the option to construct a whole set of entanglement witnesses for the doubled system of dimensions 4 × 6 × 6.
and show that it is an entanglement witness (for 2N-qubit states) with respect to the splitting […], and therefore the maximal squared overlap is given by λ^C_N. Before considering the problem of finding the range of λ_N for which W̃_N is an entanglement witness, let us first present a parametrization of states in the W class that will be convenient for our purpose, and then relate it to the parametrization of product states that have to be considered. It is well known that any state in the W class can be written as ⊗_i U_i (x₀|00…0⟩ + x₁|10…0⟩ + x₂|010…0⟩ + … + x_{N−1}|0…010⟩ + x_N|0…01⟩) with x₀ ≥ 0, x_i > 0 for i ∈ {1, …, N} and U_i unitary [9]. Note that we do not impose that the states are normalized. For the local unitaries on the qubits we will use the parametrization U_ph(δ) = diag(1, e^{iδ}) together with angles α_i, β_i, γ_i ∈ ℝ. In order to simplify our argumentation we will use the symmetry that ⊗_i U_ph(δ) |W_N⟩ = e^{iδ}|W_N⟩ and choose β_N = 0, β'_i = β_i − β_N for i ∈ {1, …, N−2} and x'_j = x_j e^{−iβ_N} for j = 0, N−1. Furthermore, using for the GHZ state the symmetry that U_ph(δ₁) ⊗ U_ph(δ₂) ⊗ … ⊗ U_ph(δ_{N−2}) ⊗ U_ph(−Σ_{i∈I₀} δ_i) ⊗ U_ph(δ_N) |GHZ_N⟩ = |GHZ_N⟩, where here and in the following I₀ = {1, 2, …, N−2, N}, one can easily see that when computing the maximal SLOCC overlap between the GHZ state and a W-class state one can equivalently choose γ_i = 0 for i ∈ I₀ and γ_{N−1} = Σ_{i=1}^{N} γ_i. We will now make use of the fact that ⟨η| […] One obtains for the respective terms of w̃_N that […]

TABLE I. This table shows the numerical values for the maximal squared overlap between |ψ_i⟩ (column) and the SLOCC orbit of |ψ_j⟩ (row). See text for further details.

FIG. 1. Hierarchic structure of SLOCC classes for mixed states within a qubit-qutrit-qutrit system. If one pure-state orbit of class |ψ_i⟩ can be approximated arbitrarily well by another SLOCC orbit |ψ_j⟩, the corresponding mixed states in class i are included in the mixed states in class j. As can be seen from Table I, |ψ₁₅⟩ is the most powerful class in the sense that any other state |ψ_i⟩ can be reached from |ψ₁₅⟩ via SLOCC operations with arbitrarily high accuracy.
6,011.6
2019-01-25T00:00:00.000
[ "Physics" ]
Design of Dual-Mode Substrate Integrated Waveguide Band-Pass Filters

Three dual-mode band-pass filters are presented in this paper. The first filter is realized by a dual-mode substrate integrated waveguide (SIW) cavity; the second is based on the integration of an SIW cavity with an electromagnetic band gap (EBG) structure; and the third is based on the integration of an SIW cavity with a complementary split-ring resonator (CSRR). The dual-mode SIW cavity is designed to have a fractional bandwidth of 4.95% at the midband frequency of 9.08 GHz; the proposed EBG-SIW resonator operates at 9.12 GHz with a bandwidth of 4.38%; and the CSRR-SIW resonator operates at 8.66 GHz with a bandwidth of 2.54%. The proposed filters have high Q-factors and generate a transmission zero in the upper stopband, all realized on Rogers RT/duroid 5880™ substrate.

Introduction

Rectangular waveguide filters are widely used in the RF/microwave industry due to their characteristic properties of low loss and high Q-factor. However, their integration with planar structures in electronic systems is very difficult and their fabrication is expensive.

To resolve these problems, a new technology called the substrate integrated waveguide (SIW) is employed. The SIW responds to these constraints in the design of microwave components by offering the advantages of low radiation loss, high power handling and high Q-factor. The SIW is formed by two solid conductor planes separated by a dielectric substrate, with the conductor side-walls emulated by rows of metalized through-plated vias [1]-[10].

On the other hand, metamaterials (electromagnetic band gaps (EBGs) and complementary split-ring resonators (CSRRs)) are used for manipulating electromagnetic waves owing to their unusual properties. The application of metamaterials has allowed a great improvement in performance and a size reduction in planar microwave applications such as filters and antennas [11]-[20].

In this paper, three dual-mode band-pass filters are proposed. These filters have been simulated in the commercial software package HFSS™, and a comparison was made between the proposed filters and several previous filters reported in [21]-[24].

Design of SIW Cavity and Proposed Transitions

The SIW cavity enables propagation of the TE_m0p modes; its necessary parameters are the length (L_SIW), the width (W_SIW), the diameter D of the metallic via holes and the spacing P between the holes. These are related to the resonant frequency (f₀) of an equivalent rectangular cavity, because the electrical behavior of the SIW cavity is very close to that of a rectangular cavity filled with the same dielectric (ε_r), with length (L_eff) and width (W_eff), as shown in Figure 1. The size of the SIW cavity is designed from the resonant-frequency expression and the empirical Equations (1) and (2). These equations are valid for P < 4D and P < λ₀(ε_r/2)^(1/2), with λ₀ the free-space wavelength [1].

The microstrip transition allows the integration of the SIW with planar structures (planar transmission lines). Figure 2 shows the geometric structure of the proposed tapered transition. The parameters of the transition (L_T and W_T) and the microstrip line (W_M) are obtained from the relations in [10].
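Since Equations (1) and (2) did not survive extraction, the following sketch uses the empirical SIW-to-equivalent-cavity mapping that is standard in the SIW literature (an assumption on our part, consistent with the quoted validity conditions): W_eff = W_SIW − D²/(0.95P), likewise for L_eff, and f_m0p = c/(2√ε_r)·√((m/W_eff)² + (p/L_eff)²). With the initial dual-mode dimensions given in the next section it predicts resonances in the 9-10 GHz range; the final 9.08 GHz center is reached only after HFSS optimization:

```python
import math

C0 = 299_792_458.0  # free-space speed of light (m/s)

def effective_length(L, D, P):
    """Equivalent rectangular-cavity dimension for a via-fenced wall.

    Uses the common empirical mapping L_eff = L - D^2 / (0.95 * P)
    (assumed here; the paper's Eqs. (1)-(2) are of this type).
    """
    return L - D**2 / (0.95 * P)

def f_te_m0p(m, p, W_siw, L_siw, D, P, eps_r):
    """Resonant frequency of the TE_m0p mode of the equivalent cavity."""
    w_eff = effective_length(W_siw, D, P)
    l_eff = effective_length(L_siw, D, P)
    return C0 / (2 * math.sqrt(eps_r)) * math.hypot(m / w_eff, p / l_eff)

# Initial (pre-optimization) estimate for the paper's dual-mode cavity:
# W_SIW = 37.4 mm, L_SIW = 22 mm, D = 0.6 mm, P = 1 mm, eps_r = 2.2.
mm = 1e-3
for mode in [(1, 2), (3, 1)]:
    f = f_te_m0p(*mode, 37.4 * mm, 22 * mm, 0.6 * mm, 1 * mm, 2.2)
    print(mode, round(f / 1e9, 2), "GHz")   # ~9.7 and ~9.4 GHz
```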
Design of Metamaterials (EBG and CSRR)

Electromagnetic band gap structures are complex periodic structures. Electromagnetic band gap materials are used in many applications, especially in frequency filtering, because their electromagnetic properties create band gaps in the electromagnetic spectrum. The application of EBGs has allowed a great improvement in the performance of numerous devices in telecommunication systems. Figure 3 shows the geometric structure of the proposed S-shaped EBG, where M is the length of the EBG, T is the width of the EBG and V is the strip length of the EBG. The EBG is founded on the Bragg condition [11]-[15].

On the other hand, the CSRR is employed as an LC resonator; thus, the resonant frequency is determined by the geometric parameters of the CSRR. These specific properties make it suitable for many applications, especially filters [16]-[20]. Figure 4 shows the geometric structure of the proposed CSRR cell, where W is the ring length of the CSRR, G is the gap length, A is the side length of the CSRR and F is the width separating the two rings.

Results

The dual-mode SIW cavity uses a Rogers RT/duroid 5880™ substrate (ε_r = 2.2, h = 0.508 mm and tanδ = 0.0009), with D = 0.6 mm and P = 1 mm. By considering the two orthogonal modes TE₁₀₂ and TE₃₀₁, the size of the dual-mode SIW cavity is designed from Equation (4). The initial dimensions of the dual-mode SIW cavity with tapered transitions have been optimized with the software package HFSS™. The detailed dimensions are decided as: W_SIW = 37.4 mm, L_SIW = 22 mm, W_M = 1.568 mm, W_T = 8 mm and L_T = 14.22 mm. Figure 5 shows the geometric structure of the dual-mode SIW cavity with tapered transitions.

The simulation results for the S-parameters of the dual-mode SIW cavity with tapered transitions are shown below in Figure 6. The simulated results presented in Figure 6 show that the dual-mode SIW cavity with tapered transitions has a 3 dB bandwidth of approximately 0.45 GHz (from 8.855 to 9.305 GHz) centered at 9.08 GHz. The insertion loss is 0.43 dB and the return loss is better than 20 dB across the band of interest. Moreover, there is a transmission zero at 9.38 GHz and the Q-factor is 414.

On the other hand, the integration of the SIW cavity with an EBG or a CSRR allows the creation of a dual-mode band-pass filter. The SIW is designed on a Rogers RT/duroid 5880™ substrate (ε_r = 2.2, h = 0.508 mm and tanδ = 0.0009), with D = 0.6 mm and P = 1 mm. For analyzing the properties of the S-shaped EBG, a one-cell EBG is etched on the top side of the SIW. The dimensions of the one-cell EBG are: M = 2 mm, T = 5 mm, V = 0.9 mm. Figure 8 shows the configuration of the one-cell SIW-EBG. The simulation results for the S₂₁ parameter of the standard SIW and the one-cell SIW-EBG are shown below in Figure 9. As shown in Figure 9, the one-cell SIW-EBG structure produces a stopband at about 13.19 GHz.

On the other hand, Figure 10 shows the geometric structure of the proposed SIW resonator. The TE₁₀₁-mode-based SIW cavity is realized as a square cavity with W_SIW = L_SIW = 15 mm and tapered transitions with the dimensions W_T = 5.76 mm, L_T = 14.22 mm and W_M = 1.568 mm. The simulation results for the S-parameters of the proposed SIW resonator are shown below in Figure 11. The simulated results presented in Figure 11 show that the SIW resonator has a 3 dB bandwidth of approximately 0.34 GHz at the midband frequency of 9.11 GHz. The insertion loss is 0.371 dB and the return loss is better than 20 dB across the band of interest. Moreover, the Q-factor is 641.
To make the characteristics of the proposed SIW resonator clear, the width (W_SIW) is discussed in detail. The simulation results for the S₂₁ parameter of the SIW resonator with different values of W_SIW are shown below in Figure 12, and Table 1 summarizes the simulation results obtained. As illustrated in Table 1, when the width (W_SIW) of the SIW resonator increases, the center frequency and the fractional bandwidth decrease, while the insertion loss becomes higher.

After studying the characteristics of the SIW cavity and the EBG, an EBG-SIW resonator is designed on the same Rogers RT/duroid 5880™ substrate (ε_r = 2.2, h = 0.508 mm and tanδ = 0.0009). Figure 13 shows the geometric structure of the proposed EBG-SIW resonator; its physical parameters are provided in Table 2. The simulation results for the S-parameters of the proposed EBG-SIW resonator are shown below in Figure 14. The simulated results presented in Figure 14 show that the EBG-SIW resonator has a 3 dB bandwidth of approximately 0.4 GHz (from 8.92 to 9.32 GHz) centered at 9.12 GHz. The insertion loss is 1.18 dB and the return loss is better than 15 dB across the band of interest. Moreover, there is a transmission zero at 12 GHz and the Q-factor is 179.

On the other hand, the characteristics of the CSRR are analyzed by using a simple model, as shown in Figure 15; this model is formed by etching a CSRR on the top side of the SIW. In order to make the characteristics of the proposed CSRR clear, its side length (A) is discussed in detail under the same conditions: ε_r = 2.2, h = 0.508 mm, W_T = 5.76 mm, W_M = 1.568 mm, L_T = 14.22 mm, W_SIW = 15 mm, D = 0.6 mm, P = 1 mm. The influence of the side length (A) is simulated and shown in Figure 16, and the simulation results are given in Table 3. The simulated results presented in Table 3 show that the resonant frequency decreases and the attenuation increases as the side length of the CSRR becomes larger.

After studying the characteristics of the CSRR, a CSRR-SIW resonator is designed on the same Rogers RT/duroid 5880™ substrate (ε_r = 2.2, h = 0.508 mm and tanδ = 0.0009). The simulated results presented in Figure 18 show that the proposed CSRR-SIW resonator has a 3 dB bandwidth of approximately 0.22 GHz (from 8.55 to 8.77 GHz) centered at 8.66 GHz. The insertion loss is 0.55 dB and the return loss is better than 20 dB across the band of interest. Moreover, there is a transmission zero at 11.73 GHz and the Q-factor is 640.

In order to verify the characteristics of the proposed filters, comparisons between the proposed filters and several previous filters reported in the references are summarized in Table 5. According to these comparisons, the proposed filters have the advantage of a high Q-factor compared to the filters in [21]-[24].
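The quoted fractional bandwidths follow directly from the 3 dB band edges; a quick arithmetic check (band center taken as the midpoint of the edges):

```python
def fbw(f_low, f_high):
    """Fractional 3 dB bandwidth (%) and band center (GHz) from band edges."""
    f0 = (f_low + f_high) / 2
    return 100 * (f_high - f_low) / f0, f0

print(fbw(8.855, 9.305))  # dual-mode SIW cavity: (~4.96 %, 9.08 GHz)
print(fbw(8.92, 9.32))    # EBG-SIW resonator:   (~4.39 %, 9.12 GHz)
print(fbw(8.55, 8.77))    # CSRR-SIW resonator:  (~2.54 %, 8.66 GHz)
```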
Conclusions

In this paper, three dual-mode band-pass filters are proposed. The dual-mode SIW cavity filter has a center frequency of 9.08 GHz with a bandwidth of 4.95%; the insertion loss is 0.43 dB and the return loss is better than 20 dB across the band of interest. In addition, there is a transmission zero at 9.38 GHz and the Q-factor is 414. The EBG-SIW resonator has a center frequency of 9.12 GHz with a bandwidth of 4.38%; the insertion loss is 1.18 dB and the return loss is better than 15 dB across the band of interest. In addition, there is a transmission zero at 12 GHz and the Q-factor is 179. The CSRR-SIW resonator has a center frequency of 8.66 GHz with a bandwidth of 2.54%; the insertion loss is 0.55 dB and the return loss is better than 20 dB across the band of interest. In addition, there is a transmission zero at 11.73 GHz and the Q-factor is 640. The simulation of the structures is done using HFSS software. The design methods are discussed and presented. The proposed filters have a small size, high Q-factor and low loss, and can be directly integrated with other circuits without any additional mechanical assembly or tuning. Additionally, these filters are easily scalable over the microwave and millimeter-wave frequency ranges.

Figure 7 shows the proposed configuration of the SIW-microstrip line with tapered transitions. The parameters of the proposed structure are optimized with the software package HFSS™. The final desired dimensions are: W_SIW = 15 mm, W_M = 1.568 mm, L_T = 14.22 mm and W_T = 5.76 mm.

Figure 5. Configuration of the dual-mode SIW cavity with tapered transitions.

Figure 6. Simulated S-parameters of the dual-mode SIW cavity with tapered transitions.

Figure 9. Simulated transmission coefficients S₂₁ of the standard SIW and the one-cell SIW-EBG.

Figure 12. Simulated transmission coefficients S₂₁ of the SIW resonator with different values of W_SIW, where L_SIW = 15 mm, W_T = 5.76 mm, L_T = 14.22 mm, W_M = 1.568 mm.

Figure 17 shows the geometric structure of the CSRR-SIW resonator; its physical parameters are provided in Table 4. The simulation results for the S-parameters of the proposed CSRR-SIW resonator are shown below in Figure 18.

Figure 16. Simulated transmission coefficients S₂₁ of the one-cell SIW-CSRR with different values of A, where G = 0.3 mm, F = 0.3 mm, W = 0.3 mm.

Table 3. The simulated results of the one-cell SIW-CSRR with different values of the side length (A).

Table 1. The simulated results of the SIW resonator with different values of the width (W_SIW).

Table 5. Performance comparison among published dual-mode band-pass filters and the proposed filters.
2,870
2015-12-04T00:00:00.000
[ "Engineering", "Physics" ]
Lignin-based epoxy composite vitrimers with light-controlled remoldability

Vitrimers open new possibilities for the reprocessing of epoxy and other thermosets. However, direct heating is not practical on many occasions, and waste vitrimers would cause great harm to the environment. In this work, we propose to use kraft lignin (KL) to fabricate a vitrimer with reprocessability and environmental friendliness. The lignin-based epoxy vitrimer was fabricated by blending epoxy-modified KL and poly(ethylene glycol) bis(carboxymethyl) ether (PEG-DCM). The obtained lignin-based epoxy vitrimer (EML/PEG-DCM) showed good light-to-heat capability. Under infrared irradiation (808 nm, 1 W cm⁻²) for only 30 s, the surface temperature of EML/PEG-DCM exceeded ∼148 °C, reaching a maximum of ∼231 °C after 5 min. This good light-to-heat effect can activate the dynamic 3D cross-linked networks and repair the vitrimer. The energy consumption of the light-controlled remolding process is only one-thousandth of that of conventional hot-pressing. This study not only helps to explore the natural characteristics of lignins, promoting their functional and intelligent utilization, but also provides a new raw-material platform for the development of green vitrimer materials.

Introduction

Thermosetting polymers such as cured epoxy resin are permanently cross-linked materials, which present excellent electrical insulation, high adhesion, dimensional stability, and corrosion resistance. In the last few decades, they have been widely used as coatings [1][2][3], adhesives, and electronic and electrical [4][5][6] materials in many areas, including machinery and aviation, the chemical industry, construction, and automobiles. Nonetheless, due to its insoluble and infusible polymer network, a conventional thermoset epoxy resin cannot be melted and re-shaped after it is cured, which usually prevents recycling. The concept of the vitrimer, introduced by Leibler's team in 2011, makes it possible to reprocess or recycle epoxy resin and other thermosetting polymers [7][8][9]. Profiting from the existence of exchangeable bonds in the cross-linked network, vitrimers not only have properties similar to traditional thermosetting epoxy resins at low temperature but can also be reprocessed (reshaped, welded, recycled) at high temperature. With both thermosetting and thermoplastic properties, the vitrimer has become a popular candidate for a variety of functional materials. For example, Wu et al. prepared fully biobased vitrimers with good thermal stability and mechanical properties that could be used as an adhesive [10], as well as carbon fiber composites with good recyclability [11]. Gao and co-workers combined hydrogen bonds and exchangeable β-hydroxyl esters into acrylate vitrimers, which demonstrated a new strategy for developing mechanically robust and reprocessable 3D-printed thermosets [12]. Niu et al. presented a self-repairable and visualized interactive human-motion detection sensor by integrating a vitrimer elastomer with photonic crystals [13]. Numerous studies to improve the reprocessability of vitrimers have been reported; most of them were synthesized from petroleum-based materials and required hot-compaction processes. With increased awareness of end-of-life recyclability, convenient operation, and energy consumption, the fabrication of a green vitrimer which can be remolded accurately and easily still remains a challenge.
In recent years, the development of polymers from renewable resources has grown incessantly both in academia and in industry [10,[14][15][16][17][18]. Lignin is the largest renewable source of aromatic building blocks in nature [19][20][21][22] and has significant potential to serve as a starting material for the production of bulk or functionalized aromatic compounds, offering suitable alternatives to the universally used, petroleum-derived BTX (benzene, toluene, and xylene) [23][24][25][26][27][28]. There are a large number of aromatic rings and conjugated functional groups inside the molecular structure of lignin, which allow the formation of strong conjugation and π-π molecular interactions among lignin molecules [26,[29][30][31], endowing lignin with unique optical properties including aggregation-induced emission, UV absorbance, and great potential for sustainable photothermal conversion [32,33]. Zhang et al. obtained a lignin-based photoresponsive actuator that can achieve up to 18% light-driven contraction under loading within 3 s and was successfully applied to power a thermoelectric generator [25]. Inspired by the interaction of the conjugated structure in melanin, Chen et al. used lignin nanoparticles to create a solar-powered thermoelectric generator that was able to drive a motor [26]. Mika et al. reported for the first time a one-pot, catalyst-free preparation of lignin-based vitrimers, whose mechanical properties can be widely tuned in a facile way [34]. Xu et al. developed an information encryption device using shape-memory cellulose acetate as a matrix and lignin as a photothermal agent [27]. Based on the green character and photothermal conversion capability of lignin, here we synthesized a lignin-based epoxy vitrimer and propose a light-to-heat approach for light-controlled remolding of the vitrimer.

Lignin has abundant functional groups, such as carboxylic, methoxy, aliphatic and phenolic hydroxyl, and carbonyl groups [35], which give it great potential for chemical modification [36]. In this work, lignin was modified with epoxy groups (epoxy-modified lignin, EML) and mixed with poly(ethylene glycol) bis(carboxymethyl) ether (PEG-DCM). The carboxylic acid groups in PEG-DCM reacted with the epoxies directly to form the three-dimensional networks of the lignin-based epoxy vitrimer (EML/PEG-DCM). Owing to the photothermal conversion effect of lignin, the dynamic networks can be activated under an 808-nm infrared laser without the addition of common, expensive photothermal materials. During the curing process, the carboxyl groups in PEG-DCM attacked the epoxies on EML to form ester bonds with the generation of additional hydroxyl groups, resulting in the formation of cross-linked networks. In the presence of zinc acetate catalysis, transesterification reactions took place at elevated temperatures to induce topological rearrangements of the networks. The chemical structures of the resulting lignin-based epoxy vitrimers were systematically analyzed using Fourier transform infrared spectroscopy and nuclear magnetic resonance spectroscopy, and their mechanical properties were evaluated in terms of tensile strength and elongation at break. Moreover, we also achieved light-controlled remolding of workpieces and evaluated the energy consumption and mechanical properties of the process.

Preparation of lignin-based epoxy vitrimers

The lignin-based epoxy vitrimers were prepared by the reaction of EML and PEG-DCM under the catalysis of zinc acetate.
Firstly, EML, zinc acetate, and PEG-DCM were evenly mixed at room temperature to give a brown sticky mixture. Then, the mixture was poured into a standard dumbbell-shaped Teflon mold and cured in a convection oven at 120 °C (or 160 °C) for 4 h (or 6 h). For all samples, the molar ratio of epoxy/COOH was set as 1/1.

Characterization

Fourier transform infrared spectroscopy (FTIR) analysis was performed on an FTIR-650 spectrometer. The sample was scanned 32 times from 4000 cm⁻¹ to 400 cm⁻¹ with a resolution of 1.5 cm⁻¹.

The epoxy value (mol/100 g) was determined by the acid-acetone titration method. The HCl-acetone solution was obtained by mixing hydrochloric acid and acetone at a volume ratio of 1:40 in a glass vial at room temperature. 1.0 g (accurate to 0.0002 g) of EML sample was accurately weighed into a 250 mL conical flask, 25 mL of HCl-acetone solution was added, and the solution reacted with the sample in the dark at room temperature for 1 h. Then, the solution was titrated with 0.1 mol/L NaOH standard solution. When the pH value of the system reached and stabilized at 7.0, the consumed volume of the NaOH standard solution was recorded as V₁. At the same time, two blank titrations were carried out under the above conditions, and the consumed volume of the NaOH standard solution was recorded as V₀. The epoxy value (EV) can be calculated according to Eq. (1), EV = c(V₀ − V₁)/(10W), where V₀ and V₁ represent the consumptions of NaOH standard solution (in mL) in the blank and the experimental part, respectively, c is the NaOH concentration, and W represents the weight of the sample.

Thermogravimetric analysis was performed on an STA 7500 thermogravimetric (TG) analyzer (TA Instruments, America). Samples (about 3-5 mg) were heated from room temperature to 800 °C at a heating rate of 10 °C/min in a nitrogen atmosphere (100 mL/min).

Proton nuclear magnetic resonance (¹H-NMR) spectra were recorded on a Bruker Avance III 400 spectrometer (Bruker, Germany) at room temperature for 8 scans. For preparing the NMR sample, 5 mg of lignin sample was dissolved in 0.5 mL of DMSO-d₆.

Mechanical properties were tested at room temperature with an LD23.503 testing machine (Lishi Instruments Co. Ltd, Shanghai), and all measurements were made at an extension rate of 5 mm/min. All samples were molded in standard dumbbell molds with a length of 35 mm, a narrow-section width of 2 mm, and a thickness of 1 mm. At least three replicate experiments were performed.

Light-healing properties were measured on an LD23.503 testing machine (Lishi Instruments Co. Ltd, Shanghai). The specimens were completely cut perpendicular to the tensile direction and the fracture surfaces were then rejoined. Healing was performed under 808 nm, 1 W/cm² infrared laser irradiation for 20 min, and then the tensile test was performed at room temperature at a rate of 5 mm/min.

Photothermal characterizations were carried out at room temperature with a HIKMICRO H21PRO thermal imaging camera (Hangzhou Hikvision Digital Technology Co. Ltd, Hangzhou) and an 808-nm infrared laser with a power density of 1.00 W/cm². The photothermal conversion efficiency (η) can be calculated according to Eq. (2) [37], η = Q/P, where Q is the heat generation rate of the sample under irradiation and P is the power of the light. The calculation of Q is critical, as P is constant at a given light power. At a specific point of the heating process, Eq. (3) is satisfied, where h represents the heat transfer coefficient and A is the area of heat transfer.
ΔT represents the difference between the temperature of our E-P600 and the ambient temperature at a certain time t. C_p is the specific heat of the sample; in this system, C_p varies with temperature and was determined by the sapphire method with DSC. dt/dT can be obtained through differentiation of the inverse function of the temperature-rise curve, and g(C_p) is defined as in Eq. (4). According to Q, the photothermal efficiency of E-P600 is 86.20% under light irradiation at 1 W. (Due to the small sample area, the calculation here ignores heat-transfer losses, and the photothermal efficiency obtained is smaller than the actual one.)

The energy utilization coefficient (Eq. 5) is defined as the ratio of healing efficiency to the energy used, J = H/E, where J is the energy utilization coefficient, H is the healing efficiency, and E is the energy expended to repair the sample.

Differential scanning calorimetry (NETZSCH DSC214) was utilized to determine the specific heat of E-P600. Heating cycles between 0 and 200 °C were recorded with a heating rate of 15 °C/min (Fig. S1).

Contact angles were measured at room temperature with a contact angle tester (FIBRO System AB, Sweden).

Results and discussion

In this work, based on the photothermal effect of lignin [32], a new kind of lignin-based epoxy vitrimer was synthesized to achieve light-controlled remoldability. The synthetic routes, curing, and network topological rearrangements of the lignin-based epoxy vitrimers are shown in Fig. 1. First, epoxy-modified lignin was produced by reacting KL with epichlorohydrin. Excess epichlorohydrin was used as a solvent to reduce the viscosity and the hydrolysable chlorine content in the epoxy prepolymers. The epoxy value of EML was measured by the titration method to be 0.55 mol/100 g. Subsequently, EML and PEG-DCM were blended homogeneously and cured at 120 °C for 4 h (or 160 °C for 6 h); the carboxylic acid groups opened the epoxide rings while hydroxyls were generated at the same time. According to the different molecular weights of PEG-DCM, the corresponding samples of the synthesized lignin-based epoxy vitrimers were labeled E-P600 and E-P2000.

FT-IR, TG, and ¹H NMR tests were carried out to verify the successful epoxy modification of KL. Figure 2 shows the FT-IR spectra of KL, EML, and the lignin-based epoxy vitrimer (E-P600), indicating that, compared with KL, the benzene-ring skeleton and the basic structure of EML have not been destroyed. To make a better comparison, the spectra were normalized based on the internal standard peak at 1512 cm⁻¹. Figure 2b shows the TGA and DTG plots of KL and EML. The main weight losses of KL at 250 °C and 365 °C are attributed to the partial degradation of lignin and the removal of methoxyl groups from the benzene ring, respectively [39]. For EML, the first DTG peak temperature appears at 325 °C, which is mainly due to the decomposition of epoxy groups [40]. The second DTG peak temperature at 385 °C is ascribed to the breakdown of the backbone of KL (Fig. S2). The ¹H-NMR spectra of KL and EML are shown in Fig. S2. As shown in Fig. S2, protons associated with aromatic rings are observed between 6.5 and 7.5 ppm. The proton signal at around 3.75 ppm is assigned to the methoxy group of lignin [41]. The proton signals at 4.25 ppm (peak a), 2.83 ppm (peak b), and 2.67/3.17 ppm (peak c) are chemical shifts of protons on the epoxy group of EML [42]. These results confirmed the successful epoxy modification of lignin.
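A small helper makes the back-titration arithmetic of Eq. (1) explicit (the formula as reconstructed above; the titration volumes below are hypothetical values chosen only to reproduce the reported 0.55 mol/100 g):

```python
def epoxy_value(v0_ml, v1_ml, c_naoh=0.1, sample_g=1.0):
    """Epoxy value in mol/100 g from the HCl-acetone back-titration.

    EV = c * (V0 - V1) / (10 * W), with V0/V1 the NaOH volumes (mL) for
    the blank and the sample, c the NaOH molarity (mol/L), and W the
    sample mass (g). Reconstruction of the paper's Eq. (1).
    """
    return c_naoh * (v0_ml - v1_ml) / (10 * sample_g)

# Illustrative volumes (not measured values): a 55 mL difference of
# 0.1 mol/L NaOH for a 1 g sample gives the reported epoxy value.
print(epoxy_value(73.0, 18.0))  # 0.55 mol/100 g
```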
The optical images of the contact angle test are shown in Fig. 3a. The contact angles of E-P600 and E-P2000 with water are 45.5° and 28.5°, respectively, indicating that the lignin-based epoxy vitrimers are hydrophilic materials. The hydrophilicity of the lignin-based epoxy vitrimers can be attributed to the PEG-DCM in the raw material. The contact angle of E-P2000 is smaller than that of E-P600, which can be attributed to the fact that E-P2000 has a higher content of PEG-DCM and a lower cross-linking density. Swelling tests of the samples were carried out, as shown in Fig. 3c, to demonstrate their differences in hydrophilicity. As shown in the figure, the initial sizes of samples E-P600 and E-P2000 are both 1 cm × 1 cm. After 2 h, the sizes of E-P600 and E-P2000 increased to 1.1 cm × 1.1 cm and 1.5 cm × 1.5 cm, respectively, and to 1.1 cm × 1.1 cm and 1.6 cm × 1.6 cm, respectively, after 24 h. Their weight changes are shown in Table 1, from which the swelling rates of E-P600 and E-P2000 in water can be calculated as 179.29% and 414.61%, respectively. In addition, a Shore hardness tester was used to test the relative hardness of the samples (Fig. 3b). The Shore hardness values of samples E-P600 and E-P2000 are 51 HA and 90 HA, respectively.

Photothermal tests were carried out to evaluate the photothermal effects and stabilities of the lignin-based epoxy vitrimers (Fig. 4). The rapid photothermal conversion of the lignin-based epoxy vitrimers was detected by an infrared camera under infrared laser irradiation. In contrast, pure PEG-DCM exhibited near-infrared inertness and little change in its surface temperature, indicating that the conversion of infrared light energy to heat was caused by the lignin. The heating and cooling curves of the lignin-based epoxy vitrimers and PEG-DCM triggered by an infrared laser with a power density of 1.00 W/cm² are shown in Fig. 4b. All samples were heated for 5 min under the infrared laser and then cooled for 7 min with the laser turned off. The surface temperatures of E-P600 and E-P2000 rose rapidly after the infrared laser was turned on, then slowed down and leveled off after 1.5 min, with the highest temperatures reaching 231 °C and 170 °C, respectively, while no significant changes were observed for the pure PEG-DCM control group. It was calculated that the photothermal efficiency of E-P600 is 86.20% under light irradiation at 1 W. The photothermal phenomenon of the lignin-based epoxy vitrimers can be attributed to the large number of aromatic rings and conjugated functional groups in the molecular structure of EML, which allow the formation of strong conjugation and π-π molecular interactions between EML molecules [26,29]. The conjugated structure of EML effectively promotes electron transitions from low-energy orbitals to high-energy states [32], and the visible and near-infrared light energy absorbed by EML is mainly released in the form of non-radiative transitions. Compared with E-P2000, E-P600 has a faster rate of light-to-heat conversion and a higher maximum temperature. This phenomenon could be due to the stronger hydrogen-bond interactions between EML molecules in E-P600 and the weakening of the benzene-ring conjugation effect by the longer molecular chains of E-P2000. The strong molecular interaction promotes the π-π aggregation of EML, thus promoting the photothermal transformation [43]. Moreover, E-P600 showed excellent photothermal stability during five cycles of light-to-heat conversion and cooling (Fig. 4c), suggesting its good practical application potential as a photothermal functional material.

Fig. 3 Optical images of E-P600 and E-P2000: a contact angle test, b Shore hardness, and c changes in size after swelling.
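The authors' Eqs. (2)-(4) account for the temperature dependence of C_p; as a rougher, commonly used lumped-capacitance proxy (our own sketch, not the paper's exact procedure), the heat-loss coefficient hA can be fitted from the exponential cooling curve and η estimated from the steady-state balance η = hA·ΔT_max/P. All numbers below are illustrative placeholders, not measured values:

```python
import numpy as np

def cooling_time_constant(t_s, dT_K):
    """Fit ln(dT) vs. t during free cooling; tau = m*Cp / (h*A)."""
    slope, _ = np.polyfit(t_s, np.log(dT_K), 1)
    return -1.0 / slope

def eta_steady_state(dT_max_K, m_cp_J_per_K, tau_s, power_W):
    """Lumped-capacitance estimate: eta = hA * dT_max / P, hA = m*Cp/tau."""
    hA = m_cp_J_per_K / tau_s
    return hA * dT_max_K / power_W

# Illustrative placeholder data (NOT values from the paper):
t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])   # s
dT = 200.0 * np.exp(-t / 90.0)                   # K, synthetic cooling curve
tau = cooling_time_constant(t, dT)               # -> 90 s
print(eta_steady_state(200.0, 0.39, tau, 1.0))   # ~0.87, i.e. ~87 %
```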
As with traditional vitrimers, the lignin-based epoxy vitrimers can be repaired under a thermal stimulus, such as hot pressing, to activate the dynamic covalent bonds in the cross-linked network. However, the hot-pressing process is tedious and energy-consuming. Figure 5 shows that remolding E-P600 with an 11 kW hot press at 150 °C needed 2 h and consumed 22 kW·h of electric energy, which is far more than the heat actually required for the remolding. Based on the good efficiency and stability of the photothermal conversion of the lignin-based epoxy vitrimers, the in situ light-controlled remolding method was used here to improve convenience and reduce energy consumption (Fig. 4). Compared with general hot pressing, the in situ light-controlled method can remold the material without disassembling the damaged parts. The whole procedure takes only 0.02 kW·h of electric energy and 20 min.

The mechanical properties of vitrimers predominantly determine their suitability for various applications [34]. All lignin-based epoxy vitrimer samples were tested by uniaxial tension. The stress-strain curves of the resultant lignin-based epoxy vitrimers with PEG-DCM of different molecular weights are shown in Fig. 6a. The introduction of flexible PEG segments endowed the lignin-based epoxy vitrimers with tunable mechanical properties obtained by simply adjusting the molecular weight of PEG-DCM in the reaction mixture [44]. When the molecular weight of PEG-DCM rose from 600 to 2000, the tensile strength increased from 1.45 MPa to 4.52 MPa, and the elongation at break decreased from 22.1% to 7.4%. Due to the entanglement between the longer molecular chains of PEG-DCM2000, the tensile strength of the lignin-based epoxy vitrimers increased with the increasing molecular weight of PEG-DCM. At the same time, the decrease in the elongation at break of the lignin-based epoxy vitrimers was attributed to the increase in the glass transition temperature of E-P2000 (Fig. S3).

According to the in situ light-controlled remolding method, the samples were irradiated for 20 min under an 808-nm infrared laser with a power density of 1 W/cm². The stress-strain curves of the reprocessed lignin-based epoxy vitrimers are shown in Fig. 6a. The healing efficiency (%) is defined as the ratio of the stress of the healed vitrimer to that of the virgin vitrimer. The results show that the efficient dynamic transesterification reaction in the cross-linked network structure under external thermal stimulation endows E-P600 with a healing efficiency of 46.2%. However, with increasing PEG-DCM molecular weight, the healing efficiency of the lignin-based epoxy vitrimers decreased; the healing efficiency of E-P2000 was only 26.3%. This might be due to the relatively low efficiency of the ester-exchange reaction within a limited time; in addition, the longer chains of PEG-DCM, which impede the movement of the polymer chains, could also weaken the rearrangement efficiency of the chemically cross-linked networks [45]. The stress-strain curve of the E-P600 sample repaired by the hot-pressing method is shown in Fig. S4: its tensile strength reached 1.22 MPa and its healing efficiency achieved 84.1%. Compared with the general hot-pressing repair method, the in situ light-controlled repair method achieved 55% of its effect with only 0.09% of the energy consumption, so the energy utilization coefficient increased nearly 600-fold.
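The quoted comparison between the two repair routes is a one-line calculation from the reported numbers:

```python
# Healing efficiency H (%) and energy E (kWh) for the two repair routes,
# as reported in the text.
H_light, E_light = 46.2, 0.02     # in situ light-controlled repair
H_hot,   E_hot   = 84.1, 22.0     # conventional hot-press repair

print(H_light / H_hot)                         # ~0.55   -> "55% of its effect"
print(E_light / E_hot)                         # ~0.0009 -> "0.09% energy consumption"
print((H_light / E_light) / (H_hot / E_hot))   # ~604    -> "nearly 600-fold"
```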
The TGA/DTA results in Fig. 6b show that the lignin-based epoxy vitrimers have good thermal stability. The initial decomposition temperature at 5% weight loss and the temperature of maximum decomposition rate reached 280 °C and 374 °C for E-P600, and 315 °C and 392 °C for E-P2000, respectively. Compared with the uncured state, the thermal stability of EML cured with PEG-DCM is greatly improved, which also indicates that EML and PEG-DCM reacted adequately. During the experiments, we noted that the mixture of uncured EML and PEG-DCM had a high viscosity, so E-P600 was tested as an adhesive. As shown in Fig. S5, the E-P600 adhesive can withstand a direct pull force exceeding 9.0 N. The adhesion mechanism of lignin rests on the phenolic hydroxyl groups, methoxy groups, and free C5 positions on the benzene ring, which can be further cross-linked [46]. Consistent with this mechanism, the experiment was carried out under acidic conditions, where the hydrogen atom at the ortho position of the phenol is more active [47]. Meanwhile, the adhesiveness of the mixture was enhanced by the introduction of epoxy groups and the hydroxyl groups formed during the curing process. Conclusion In this work, we have reported unprecedented yet low-cost lignin-based epoxy vitrimers with good photothermal conversion properties and light-controlled remoldability. The lignin-based epoxy vitrimer was fabricated by blending epoxy-modified lignin (EML) and poly(ethylene glycol) bis(carboxymethyl) ether (PEG-DCM). The obtained lignin-based epoxy vitrimers showed good light-to-heat capability, attributed to the fact that the conjugated structures of lignin effectively reduce the energy required for electronic transitions from low-energy orbitals to high-energy states and then release the absorbed energy mainly in the form of non-radiative transitions. Exploiting this property, the lignin-based epoxy vitrimers can be remolded by an in situ light-controlled method. Under infrared irradiation (808 nm, 1 W/cm²) for only 30 s, the surface temperature of EML/PEG-DCM exceeded ~148 °C, and it reached a maximum of ~231 °C after 5 min. More importantly, the whole light-controlled remolding procedure took only 0.02 kW·h of electric energy and 20 min. We believe that this work provides a new idea for the application of lignin in photothermal materials and for research on low-power light-controlled remolding materials. Conflict of interest The authors declare no competing interests.
5,084.8
2023-02-01T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Ultra-short pulse laser acceleration of protons to 80 MeV from cryogenic hydrogen jets tailored to near-critical density Laser plasma-based particle accelerators attract great interest in fields where conventional accelerators reach limits in size, cost, or beam parameters. Although particle-in-cell simulations have predicted several advantageous ion acceleration schemes, laser accelerators have not yet reached their full potential in producing simultaneously high radiation doses and high particle energies. The most stringent limitation is the lack of a suitable high-repetition-rate target that also provides a high degree of control over the plasma conditions required to access these advanced regimes. Here, we demonstrate that the interaction of petawatt-class laser pulses with a pre-formed micrometer-sized cryogenic hydrogen jet plasma overcomes these limitations, enabling tailored density scans from the solid to the underdense regime. Our proof-of-concept experiment demonstrates that the near-critical plasma density profile produces proton energies of up to 80 MeV. Based on hydrodynamic and three-dimensional particle-in-cell simulations, transitions between different acceleration schemes are shown, suggesting enhanced proton acceleration at the relativistic transparency front in the optimal case. Supplementary Fig. 1. Proton profile measurements using the depth dose detector. Horizontal cross-sections of the angular proton depth dose profile recorded using an on-shot scintillator-based depth dose detector (DDD) along the laser propagation direction. Panel a is plotted for an expanded target (d = 15 µm), yielding a forward-peaked proton distribution as indicated by the white contour line, and is compared to the unexpanded case in b, which features only a slight profile curvature. Note that the recorded signal strength is ten times smaller in the unexpanded case. The change in the proton emission distribution inferred from the TPS measurements in Fig. 3 of the main text is qualitatively supported by the recordings of the on-shot depth dose detector (DDD). This plastic-scintillator-based detector records the angularly resolved proton depth dose profile within the horizontal plane behind the target. It covered an opening angle of about ±7° with respect to the laser propagation direction, and an additional energy filter in front of the 2 mm thick scintillator plate set the low-energy detection cut-off to 24 MeV. Note that the scintillator-based detector is also sensitive to high-energy electrons and x-rays from the high-intensity laser-plasma interaction, although their absorption is smaller than that of protons. While this background complicates the detection of maximum proton energies, we found that the dependence of the depth dose distribution on the emission angle can be well distinguished. DDD recordings of a shot on an expanded (d = 15 µm) and an unexpanded hydrogen jet (d = 5 µm) are shown in Supplementary Fig. 1a and b, respectively. While a slight profile curvature is found for the unexpanded case, the depth dose profile obtained for the pre-expanded configuration shows a clear dependence on the emission angle, as indicated by the white contour line. This agrees well with the shift of the proton emission direction towards the laser forward direction for increased shadow diameters as inferred from the TPS measurements.
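For orientation, the relation between proton energy and penetration depth in a depth dose detector can be sketched with the Bragg-Kleeman rule; the constants below are typical water-equivalent values and are assumptions, not the calibration actually used for the plastic-scintillator DDD:

```python
# Bragg-Kleeman rule: range R = alpha * E**p (water-equivalent, assumed values)
ALPHA, P = 0.0022, 1.77  # cm, dimensionless

def proton_range_cm(energy_mev: float) -> float:
    return ALPHA * energy_mev**P

def energy_from_depth(depth_cm: float) -> float:
    return (depth_cm / ALPHA) ** (1.0 / P)

# A 24 MeV proton (the stated detection cut-off) stops after roughly:
print(f"range at 24 MeV: {proton_range_cm(24):.2f} cm")
# and a proton with 2 mm of residual range (the scintillator thickness) carries:
print(f"energy for 0.2 cm residual range: {energy_from_depth(0.2):.1f} MeV")
```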
SUPPLEMENTARY DISCUSSION: PIC SIMULATIONS WITH VARYING OFFSET OF THE LASER WITH RESPECT TO THE TARGET AXIS A series of PIC simulations was performed to study the influence of non-central hits for the three target expansion states discussed in the main text. For lateral target position offsets between 0 µm and 5 µm, the proton energy spectra in the xy-plane as a function of the emission angle are shown in Supplementary Figs. 2, 3 and 4. For the unexpanded target configuration in Supplementary Fig. 2, the maximum proton energies E_max are almost constant and independent of the emission angle for small offsets. This is due to the mostly isotropic proton emission perpendicular to the jet axis resulting from TNSA. Starting from position offsets of 1.5 µm, the most energetic protons are deflected toward progressively increasing angles, reducing E_max in the laser forward direction. We now consider the best 1/3 of the laser shots (as in Fig. 3a of the main text) in order to compare E_max in the laser forward direction from the simulations with the observed distribution of the proton energies in the experiment. For the measured target position jitter of 5 µm, these shots are expected to have a maximum offset of about 2 µm. We observe maximum proton energy fluctuations in the laser forward direction between 20 MeV and 38 MeV in the experiment, in good agreement with the 26 MeV (2 µm offset) to 35 MeV (0 µm offset) seen in the simulation. This demonstrates that the target position offset is the dominant contribution to the energy fluctuations in the experiment. Similarly good agreement between simulation and measurement is obtained for largely expanded targets. In the simulations for a target diameter of 33.5 µm (see Supplementary Fig. 3), the energies of the isotropically emitted component remain constant at about 10 MeV even at the highest simulated offset of 5 µm. Given the limited number of shots in the experiment with largely expanded targets (9 shots in total for d > 30 µm), the distribution of the observed maximum proton energies between 10 MeV and 20 MeV agrees well with the simulation results. Lastly, simulation runs with non-central hits for the optimal target expansion are displayed in Supplementary Fig. 4. Here, the emission direction and the energy of the most energetic particles vary sensitively with the exact target position. As such, the energies in the laser forward direction are reduced from 110 MeV at 0 µm offset to about 30 MeV at 2 µm offset. These large fluctuations in E_max reproduce the observations in the experiment, where for the best 1/3 of the shots E_max was measured between 20 MeV and 80 MeV (see Fig. 3a of the main text). In summary, the simulations with non-central hits yield an impressive match with the proton energy fluctuations observed in the experiments and show that these fluctuations are dominated by the spatial laser-target overlap for all density regimes. Supplementary Fig. 6. Proton acceleration with intrinsic laser contrast conditions. a Temporal intensity profile of the DRACO PW laser pulse without the plasma mirror device. As a result of these intrinsic laser contrast conditions, the hydrogen jet has already expanded to a shadow diameter of about 15 µm at the time of peak intensity (see shadowgraphy image in a). The onset time of the target pre-expansion, derived from the threshold intensity for dielectric breakdown [1], is a few tens of picoseconds before the arrival of the peak.
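A minimal numerical sketch of how lateral pointing jitter maps onto E_max fluctuations is given below; the Gaussian jitter model, the linear interpolation between simulated offsets, and the assumed tail value at 5 µm are illustrative assumptions, not the analysis actually performed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated anchor points for the optimal expansion (0 and 2 um from the text;
# the value at 5 um is an assumed extrapolation for illustration).
offsets_sim = np.array([0.0, 2.0, 5.0])   # um
emax_sim = np.array([110.0, 30.0, 10.0])  # MeV

# Assumed Gaussian lateral jitter with 5 um FWHM; interpolate E_max(offset).
jitter = np.abs(rng.normal(0.0, 5.0 / 2.355, size=10_000))
emax_shots = np.interp(jitter, offsets_sim, emax_sim)

best_third = np.sort(emax_shots)[-(len(emax_shots) // 3):]
print(f"best third of shots: {best_third.min():.0f}-{best_third.max():.0f} MeV")
```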
The target density profile generated by the laser light accumulated over tens of picoseconds is expected to differ from the density distributions discussed in the main text; it cannot be controlled independently, as with the short-pulse pre-pulse, nor assessed by our probing technique (which is sensitive only to the plasma scale length and not to longitudinal modulations of the bulk density). The distribution of the maximum proton energies (for a total of 55 shots with 23 J on target) in the laser forward direction under intrinsic laser contrast conditions is displayed in b. The large shot-to-shot fluctuation is again dominated by the varying spatial overlap of the laser focus spot and the hydrogen jet. While in the majority of the shots energies below 35 MeV are measured, three shots (labeled (1)-(3)) with higher maximum proton energies are observed, the highest at almost 80 MeV. The measured proton energy spectra for these three shots are shown in c. The non-exponential proton spectra in two of the three shots are an additional indication that the target density profiles differ from the short-pulse pre-pulse case. The single-shot performance demonstrates that energies of 80 MeV are achievable without the plasma mirror, which otherwise represents the main limitation for true repetition-rate operation. Supplementary Fig. 7. Maximum proton energies in relation to the spatial overlap of laser focus and target position. a extends Fig. 3a of the main text, showing the entire scan (350 shots in total) of maximum proton energies measured in the laser propagation direction as a function of the shadow diameter d. Different colors indicate different transmission cut-offs. The highest proton energies are measured for the 33% of shots with the least amount of light bypassing the target (blue dots, as in Fig. 3a of the main text). Black and yellow dots each represent one-third of the shots, with the largest and intermediate transmission, respectively. In particular for the black dots, the maximum proton energies are lower due to the insufficient spatial overlap of the jet and laser focus position. The best-performing shots within 4 µm bins of the shadow diameter are furthermore highlighted by the orange circles. The comparatively low number of shots at the highest energies, and their clear separation from the overall distribution, is due to the simultaneous requirement of perfect laser-target overlap, ideal on-shot laser parameters, and emission of the fastest protons in the detection direction of the TPS. The dominance of the lateral target position jitter as the main source of fluctuation in the maximum proton energies is shown in b for a subset (one day of experiments) with a shadow diameter of 5 µm. With increasing lateral distance from the central position, the proton energy (blue dots, left y-axis) decreases and the fraction of transmitted light (red dots, right y-axis) increases, as indicated by the dotted lines as a guide for the eye.
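The shot-selection logic described for Supplementary Fig. 7a can be summarized in a short sketch (the function and array names are hypothetical):

```python
import numpy as np

def best_shots_per_bin(diameter_um, transmission, emax_mev, bin_um=4.0):
    """Per-bin maxima of E_max for the lowest-transmission third of the shots."""
    d, t, e = map(np.asarray, (diameter_um, transmission, emax_mev))
    # Lowest transmitted light ~ best spatial laser-target overlap.
    sel = t <= np.quantile(t, 1 / 3)
    bins = np.floor(d[sel] / bin_um).astype(int)
    return {b * bin_um: e[sel][bins == b].max() for b in np.unique(bins)}

# usage: best_shots_per_bin(d, T, Emax) -> {bin lower edge (um): best E_max (MeV)}
```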
2,207.8
2023-07-07T00:00:00.000
[ "Physics", "Engineering" ]
The Role of Dendritic Cells in Bone Loss and Repair The cells of innate immunity, such as neutrophils, macrophages, and dendritic cells (DCs), adhering to bone implant surfaces release reactive radicals, enzymes, and chemokines, which induce subsequent bone loss. DCs do not play a large role in bone homeostasis under steady-state conditions, but they can act as osteoclast precursors in inflammatory foci of bone. As potent antigen-presenting cells responsible for the activation of naive T cells and the modulation of T cell activity through the RANK/RANKL pathway and other osteoclastogenesis-associated cytokines, DCs are critically situated at the osteoimmune interface. Titanium (Ti) and magnesium (Mg), candidate implant metals, with and without calcium-phosphate coatings formed on them by plasma electrolytic oxidation, were used to evaluate their immunomodulatory effects on DCs. The calcium-phosphate coating on the metals induced a mature DC (mDC) phenotype, while bare Ti and Mg promoted a noninflammatory environment by supporting an immature DC (iDC) phenotype, based on surface marker expression, cytokine production profiles, and cell morphology. These findings have numerous therapeutic implications in addition to the important role of DCs in regulating innate and adaptive immunity. A direct contribution of these cells to inflammation-induced bone loss establishes the DC as a promising therapeutic target, not only for controlling inflammation but also for modulating bone destruction. Introduction There are two forms of immune response in an organism, innate and adaptive, and dendritic cells (DCs) serve as a bridge between them. The role of innate immunity cells as DC-derived osteoclasts (DC-derived OCs) has been indicated by phenotypic and functional characterization studies. Moreover, DCs modulate T cell activity through RANK/RANKL and osteoclastogenesis-associated cytokines [12][13][14]. The role of DCs, as key components of the defensive response of the organism, in bone pathology has been demonstrated in the field of osteoimmunology research. Normally, the localization of DCs in the stroma proper or adjacent to bone tissue is rare, and DCs do not take part in the restoration of bone defects [15,16]. On the other hand, the presence of DCs in the periodontal fluid of patients with periodontitis and in the synovial fluid of joints of patients with rheumatoid arthritis has been documented [17,18].
In these diseases, DCs localized in the bone stroma can form aggregates with T cells, creating inflammatory foci, to which they migrate by means of chemotaxis and the adhesion molecules RANK-RANKL. It has been shown that, during inflammation of bone tissue, the expression of these receptors on the DC surface indirectly induces bone degradation through the regulation of T-cell activity and through the differentiation and survival of osteoclasts [15,16,19]. Rivollier et al. have shown that myeloid DCs from human peripheral blood can be transformed into osteoclasts in the presence of macrophage colony-stimulating factor (M-CSF) and the soluble form of the receptor ligand RANKL, suggesting direct participation of DCs in osteoclastogenesis [16,20,21]. Further, upon co-cultivation of CD11c+ CD11b− DCs, which resemble the classical osteoclast precursors, under the influence of granulocyte-macrophage colony-stimulating factor (GM-CSF) and interleukin-4 (IL-4), their transformation into osteoclasts was demonstrated. This suggests that CD11c+ DCs transformed into functional osteoclasts, given immune interaction with CD4+ T cells and other factors in the surrounding bone tissue environment, can induce the bone resorption process in vivo. An important protein currently considered a master regulator of osteoclastogenesis has also been identified: dendritic cell-specific transmembrane protein (DC-STAMP). It is assumed that DC-STAMP plays an imperative role in bone homeostasis by regulating the differentiation of both osteoclasts and osteoblasts [22]. In general, these data point to a critical effect of DCs on the process of osteoclastogenesis in inflammatory bone diseases, where they act not only as powerful antigen-presenting cells that activate and regulate the cells of the immune system, but also directly influence the destruction of bone tissue. There is a lack of definitive evidence for the physiological relevance of this phenomenon in vivo, but DCs could act at the osteoimmune interface, contributing to bone loss in inflammatory diseases [12,16,21]. At present, in the field of endoprosthetics, research is aimed at creating biomaterials that can replace damaged tissue sites of the human organism. These studies have been most successful in the treatment of pathologies of the musculoskeletal system, including the endoprosthetics of large joints. At present, stainless steel and titanium alloys are the main materials used for the manufacture of immersion implants. Nevertheless, the use of fixators made of bioinert metals in osteosynthesis requires repeated surgical interventions aimed at removing the metal implants once they have performed their role, and this is often no less traumatic than the osteosynthesis itself. Therefore, the search remains relevant for bioresorbable materials suitable for creating implants used in osteosynthesis, materials that could be completely metabolized by the organism without exerting a pathological effect on the surrounding tissues and the organism as a whole [23,24]. Such materials include magnesium alloys, which, owing to their strength properties, are suitable for the production of various types of implants. This material has good biocompatibility and sufficient corrosion resistance, and magnesium biodegradation products show a positive effect on osteogenesis, although the mechanism of their action is not fully studied [25].
Both bioinert and bioresorbable materials, when introduced into the organism, come into contact with antigen-presenting cells, and their properties, such as surface topography and chemical composition, play an important role in initiating a pro- or anti-inflammatory immune response. Thus, DCs are suitable cells for evaluating responses to biomaterials, because they can transform into osteoclasts under bone inflammation and can also initiate and modulate the immune response to implant materials. In this way, determining the ability of a biomaterial to influence the phenotype of DCs is quite applicable for assessing its compatibility with the organism. Only a few metal-based nanoparticles have been reported to activate T cell responses or homeostasis. For example, TiO2 nanoparticles provoke inflammatory cytokines and increase DC maturation, the expression of co-stimulatory molecules, and the priming of naive T cell activation and proliferation [26]. Most importantly, activation of pattern recognition receptor signaling can also enhance antigen presentation by upregulating the expression of MHC and co-stimulatory molecules (CD80 and CD86) on DCs, leading to activation of adaptive immunity [1,7]. Thus, studying the phenotype and functional activity of DCs after exposure to biomaterials with controlled properties indicates the direction of the immune response induced by their introduction into the organism and makes it possible to compose an immunomodulating design for such biomaterials. In view of the above, the aim of this work is to reveal the immunomodulating properties of bioinert (titanium) and bioresorbable (magnesium) metal implants according to the degree of their influence on DC markers. Materials Bacterial lipopolysaccharide (LPS, Abcam, USA) was used. The implant disks were prepared from 1-mm-thick sheets of commercially pure Ti (wt.%: Fe 0.25; Si 0.12; C 0.07; O 0.12; N 0.04; H 0.01; Ti, balance) and Mg alloy MA8 (1.5-2.5 wt.% Mn; 0.15-0.35 wt.% Ce; Mg, balance). Samples of a size of 15 mm × 20 mm × 2 mm underwent preliminary mechanical treatment to a roughness parameter of Ra = 0.12 μm. After mechanical treatment, the samples were thoroughly washed with deionized water and ethanol and dried in an airflow. The sample appearance after volumetric tests was observed using a Stemi 2000CS stereo-microscope (Zeiss, Germany). The electrolyte was prepared in 2 liters of deionized water by adding the following components: 30 g/L of calcium glycerophosphate dihydrate (C3H7O6PCa·2H2O) and 40 g/L of calcium acetate monohydrate (Ca(CH3COO)2·H2O). The electrolyte pH was adjusted to 10.9-11.3 by adding 20% NaOH solution [27]. Plasma electrolytic oxidation was carried out using a reversible thyristor rectifier as the power supply, equipped with an automated control system with appropriate software. All samples were treated in the unipolar PEO mode at a current density of 0.67 A/cm². The treatment time was 300 s, and the final voltage equaled 540 V. Experimental series were carried out with sample coatings that included calcium and phosphorus (Ca and P) on the Ti and Mg alloy substrates. The samples are denoted in the text as follows: uncoated titanium, Ti 1; titanium with calcium-phosphate coating, Ti 2; uncoated Mg alloy, Mg 1; and Mg alloy with coating, Mg 2. The samples were punched to 15 mm in diameter for a snug fit in the wells of 6-well tissue culture polystyrene (TCPS) plates (Thermo Scientific, Germany).
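A quick check of the charge budget implied by these PEO parameters (values from the text; the energy figure is an upper bound since the voltage only reaches 540 V at the end of the run):

```python
current_density = 0.67  # A/cm^2, from the text
duration = 300.0        # s
final_voltage = 540.0   # V, reached only at the end of the voltage ramp

charge_density = current_density * duration           # = 201 C/cm^2
energy_upper_bound = charge_density * final_voltage   # <= ~108.5 kJ/cm^2
print(f"{charge_density:.0f} C/cm^2 delivered; "
      f"energy <= {energy_upper_bound / 1e3:.1f} kJ/cm^2 (overestimate)")
```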
The samples were sterilized in a laboratory oven (Thermo Scientific, Denmark) at 180°C for 15 min (with monitoring of the surface properties), in accordance with the rules for the sterilization of medical devices. Animals Study approval was received from the local Ethical Committee of the Pacific State Medical University (Vladivostok, Russia) under No. 2015-0102. For the experiments, adult three-month-old male rats weighing 250 g were used. Animals were euthanized using carbon dioxide asphyxiation as approved by the MIT committee on animal care (National Institutes of Health Guide for the Care and Use of Laboratory Animals, NIH Publication No. 80023, 1996). Dendritic cell (DC) culture Two primary cell types were used in this study: human peripheral blood mononuclear cells (PBMC) and rat bone marrow-derived DCs (RMDC). Human PBMC were obtained from donor blood (Border station of blood transfusion, Primorye, Vladivostok, RU). All donors were in good health and were negative for blood-borne pathogens as detected by standard blood bank assays. The apheresis product was processed to enrich the PBMC fraction using Ficoll-Hypaque (BioLegend, CA, USA) density gradient separation according to standard protocols, as previously described [28]. RMDC were generated as previously described by Onai et al. [29]. Briefly, bone marrow (BM) cells were harvested from male rats and cultured in 24-well culture plates, at a concentration of 5 × 10⁶ cells per well, in 800 μl of RPMI-1640 (Lonza, Belgium) supplemented with heat-inactivated 10% fetal calf serum (FCS), 100 μg/ml of penicillin, 100 μg/ml of streptomycin, and 5 × 10⁻⁵ M 2-mercaptoethanol (Lonza, Belgium), plus GM-CSF (50 ng/ml) and IL-4 (10 ng/ml). On days 3, 6, and 9, the supernatant was gently removed and replaced with the same volume of supplemented medium. On day 9 of culture, ≈80% of the cells were CD11c+ DCs. Cell viability assay To determine the toxicity of the samples, RMDC cultures were prepared at approximately 2,000 and 20,000 cells per well, respectively, in 96-well flat-bottom tissue culture plates. A mitochondrial colorimetric assay (MTT assay), based on the percentage of total succinate dehydrogenase (SDH) activity, was used [30]. In each well with a cellular monolayer, 200 μL of medium was left, to which 40 μL of a 1.2 mM MTT solution (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, Sigma-Aldrich, USA) was added. The cells were incubated at 37°C and 5% CO2 for 4 h. The upper medium was removed carefully, and the intracellular formazan was solubilized by adding 200 μL of dimethyl sulfoxide (Sigma-Aldrich, USA) to each well. Then, the contents of the wells were mixed thoroughly using a pipette. Two hundred microliters from each well were transferred into separate wells on a 96-well ELISA plate (Corning Costar, Lowell, MA, USA). The absorbance was measured at 570 nm. The results, expressed as optical density (OD), were obtained from three different experiments for each surface modification. The mean fluorescence intensity (MFI) values for CD14, CD34, and CD83 expression on RMDC (polyclonal antibodies with species reactivity to human, mouse, and rat, 1:200, MyBioSource, Inc., USA) were analyzed using a confocal scanning laser microscope (Zeiss, Germany) connected to an Evolution MP Color Camera (Media Cybernetics Inc., Bethesda, MD, USA). The camera used Image-Pro Plus 7.0 software (Media Cybernetics Inc.), and the acquired digital images were processed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) for qualitative analysis.
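A minimal sketch of the viability calculation from the MTT optical densities (the OD readings are illustrative placeholders, not measured data):

```python
import statistics

def relative_viability(od_sample, od_control):
    """Viability (%) = mean(OD570 of sample) / mean(OD570 of control) * 100."""
    return statistics.mean(od_sample) / statistics.mean(od_control) * 100.0

# Illustrative triplicate readings from three independent experiments:
print(f"{relative_viability([0.82, 0.79, 0.85], [0.88, 0.90, 0.86]):.1f}%")
```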
Methods for determining the functional activity of the cells RMDC were incubated with the implant samples at 37°C, and the separated supernatants were frozen and stored at −20°C. The disrupted cells were mixed with 100 μL of Griess reagent, consisting of equal volumes of 0.1% N-(1-naphthyl)ethylenediamine dihydrochloride and 1% sulfanilamide (ICN, USA) in 2.5% phosphoric acid solution. After incubation for 10 min, the absorbance was measured at 540 nm using a Multiskan Titertek Plus spectrophotometer (Flow lab, Finland). ATPase activity was determined by adding to the cellular monolayer 20 μl of ATPase substrate (8 mg of ATP per 1 ml of Tris-HCl buffer (pH 7.8) containing 87 mg of NaCl, 28.7 mg of KCl, and 52 mg of MgCl2·6H2O; ICN, USA), and the samples were left for 30 and 60 min. The reaction was stopped by adding 100 μl of a 1:1 mixture of ascorbic and molybdic acids. After 20 min, the optical density of the substrates was measured on a spectrophotometer at a wavelength of 620 nm. For the determination of lactate dehydrogenase (LDH) activity, a modified Lloyd method was used. 100 μl of substrate (2 mg/ml iodonitrotetrazolium in phosphate buffer, pH 7.2, with 0.4% MnCl2; Sigma-Aldrich, USA) was added to the wells of a plate with adherent cells and incubated at 37°C for 30 min. Diformazan pellets were dissolved by adding 100 μl of isopropyl alcohol acidified with 0.04 M HCl for 20 min. The optical density of the substrates was determined on a spectrophotometer at a wavelength of 650 nm. The activity of cytochrome oxidase was determined by adding to the cellular monolayer 100 μl of 0.1 M acetate buffer, pH 5.5, containing 10 mg/ml of MnCl2, 0.33% hydrogen peroxide, and 2 mg/ml of diaminobenzidine. After 10 min of incubation at room temperature, the reaction was stopped by the addition of 10% sulfuric acid (100 μl per well). The quantity of formed product was determined by measuring the absorption at 492 nm. Samples containing the substrate solutions and 10% sulfuric acid were used as the control. The results were obtained from three different experiments for each surface modification. The spectrophotometric optical density data were evaluated as the stimulation index (T), calculated as the difference between the mean optical densities of the solutions containing the reaction products of control and experimental cells, divided by the mean optical density of intact cells (and expressed as a percentage). Cytokine assays Measurements of the RMDC cytokines RANTES, TNF-α, IL-1, IL-6, IL-10, and IL-12 in the supernatants were performed using specific solid-phase sandwich enzyme-linked immunosorbent assays (ELISA). Capture and detection antibodies were purchased as Mouse ELISA Kits (Abcam, USA) and used following the procedure recommended by the manufacturer. The absorbances were measured at 450 nm using a microplate ELISA reader. Scanning electron microscopy (SEM) of adherent RMDC The qualitative analysis of cell adhesion was performed at 1, 3, 6, and 9 days. The disks were washed three times with warm D-PBS to remove non- or loosely adherent cells. The cell samples were fixed overnight with 1 ml of 0.2 M cacodylate buffer (pH = 7.4) containing 2% glutaraldehyde, 3% paraformaldehyde, and 0.02% (w/v) picric acid (Sigma). The cells were washed three times with 0.2 M cacodylate buffer and were post-fixed with 0.5 ml of 1% osmium tetroxide (OsO4, Sigma) for 1 h.
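The stimulation index T defined above can be expressed compactly as follows (a sketch; the sign convention, chosen so that negative T corresponds to the reduced values reported for Ti 1 and Mg 1 below, is an assumption, and the OD readings are illustrative):

```python
import statistics

def stimulation_index(od_experimental, od_control, od_intact):
    """T (%) = (mean OD_experimental - mean OD_control) / mean OD_intact * 100."""
    return (statistics.mean(od_experimental) - statistics.mean(od_control)) \
        / statistics.mean(od_intact) * 100.0

# Illustrative ODs reproducing a T of about -6.25% (cf. the Mg 1 ATPase value):
print(f"T = {stimulation_index([0.42, 0.43], [0.47, 0.48], [0.80, 0.80]):.2f}%")
```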
The cells were then dehydrated in a sequential series of increasing acetone concentrations: 15, 30, 45, 75, 90, and 100% acetone for 30 min at each concentration. Subsequently, the samples were dried in an E3000 Critical Point Dryer (Quorum Technologies, Canada) and sputter-coated with a thin layer (~5 nm) of carbon (JEE-420, JEOL, Japan). The micrographs were collected using an ULTRA PLUS-40-50 scanning electron microscope (Zeiss, Germany) at an accelerating voltage of 5 kV. Statistical analysis Data from the differentiation assay were analyzed by analysis of variance (ANOVA) and the Mann-Whitney method for comparisons between groups. The levels of cytokines and cellular proliferation, as well as the fluorescence intensity of the mature DCs, were also analyzed by ANOVA followed by the Newman-Keuls test for multiple comparisons. Values were considered significant when different at P < 0.05. The morphology and activity of dendritic cells Innate immunity is nonspecific, and the first line of organism defense is carried out with the help of pattern-recognition receptors, which play a significant role in the early reaction and the subsequent pro-inflammatory response. The physical and chemical properties of implants initiate various cellular reactions, such as absorption and intracellular biodistribution, which lead to a certain form of immune response [10,31]. SEM micrographs showed that RMDC after contact with Ti 1 and Mg 1 had a spherical shape with an estimated size of 20 μm in diameter (Figure 1B, E). DCs treated with the calcium phosphate coating exhibited more of the dendritic processes associated with mature DCs; the cells were larger, with a folded surface, and the formation of numerous dendritic appendages was also observed (Figure 1D, F). In contact with Mg 1, the diameter of the cells remained within normal limits, but the architectonics of the surface showed folding, which indicated their activation (Figure 1E). In contact with the Mg 2 coating, the morphological changes of the DCs were similar to those of cells contacted with Ti 2 (Figure 1F). Cell viability It is known that the MTT assay provides an estimate of total succinate dehydrogenase (SDH) activity. SDH is a flavoprotein dehydrogenase and belongs to the succinate oxidase enzyme complex that forms the membrane respiratory chain of the mitochondrion. The flavin group of this enzyme contains four iron atoms and is bound covalently to the protein [32]. We found no decrease in the SDH activity of RMDC in the initial observation period (2 days) in DCs contacting Ti 1, Ti 2, and Mg 1, which indicates a lack of cytotoxic effect of the studied samples (Figure 2A). A significant decrease in intracellular content after 3 days in cells in contact with Mg 1 and Mg 2 indicated a stimulating effect of these samples. It should also be noted that the dynamics of the cellular response to the coated samples were similar despite the different substrate materials. Enzyme activity of the cells Nitrite levels, an indirect measure of nitric oxide production, were assessed by the Griess assay. The value for RMDC stimulated by Ti 1 was 14.5 μM, below those stimulated by Ti 2, Mg 1, and Mg 2 (22.8, 21.9, and 19.03 μM, respectively), but the difference was not significant (P = 0.07). The level of enzymes released from the cellular membrane (ATPase) was significantly lower for titanium Ti 1 and Mg 1 (T = −3.75 ± 0.5% and T = −6.25 ± 0.6%, respectively, after 3 days of contact) than for the coated samples (P = 0.024, Figure 2B).
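A hedged sketch of the statistical pipeline described above, using SciPy; SciPy offers no Newman-Keuls post hoc test, so Tukey's HSD is used here purely as an illustrative stand-in, and the triplicate readings are placeholders:

```python
from scipy import stats

# Illustrative triplicate nitrite readings (uM), not the measured data.
ti1 = [14.2, 14.8, 14.5]
ti2 = [22.5, 23.1, 22.8]
mg1 = [21.5, 22.3, 21.9]

f_stat, p_anova = stats.f_oneway(ti1, ti2, mg1)   # one-way ANOVA across groups
u_stat, p_mw = stats.mannwhitneyu(ti1, ti2)       # pairwise group comparison
posthoc = stats.tukey_hsd(ti1, ti2, mg1)          # stand-in for Newman-Keuls
print(f"ANOVA p = {p_anova:.4f}, Mann-Whitney p = {p_mw:.4f}")
print(posthoc)                                    # significant if p < 0.05
```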
Membrane ATPase participates in the hydrolysis of phosphate bonds and is an indicator of stimulated cellular metabolism, accompanied by a decrease in intracellular content. These data show maximum stimulation of the cells, associated with adhesion to the sample surfaces, within the first days, and a difference (p < 0.05) in the indices for the coated titanium and magnesium (Ti 2, Mg 2), which had less stimulating effect than the pure metals (Ti 1, Mg 1). LDH is a coenzyme-dependent dehydrogenase that catalyzes the transfer of a reduced equivalent (hydrogen) from lactate to NAD+ or from NADH to pyruvate. LDH acts at the last step of glycolysis, which occurs under anaerobic conditions and results in the reduction of pyruvate, yielding lactate and NAD+. Most of the enzyme in the cell is weakly bound to the cell structure and localized in the cytoplasm, with a smaller part attached firmly to mitochondrial membranes [33]. We found no decrease in LDH activity over the observation period, which indicates a lack of cytotoxic effect of the studied samples (Figure 2C). A significant increase in intracellular content over this period reflected the increased metabolic activity of the cells after contact with the samples, with a higher stimulation index for cells in contact with Mg 1 (p = 0.035). Cytochrome oxidase and SDH are the main components of the normal aerobic oxidative system of tissue cells, also known as the succinate dehydrogenase complex, in which SDH is the first component and cytochrome oxidase is the second. Cytochromes are subdivided into three groups according to their chemical structure and spectrum: cytochromes a, b, and c. Oxidized cytochrome oxidase is reduced by cytochrome c, catalyzing the transfer of four electrons to the oxygen molecule. Thus, cytochrome oxidase is a representative of the third group of oxidases. A difference was observed between the enzyme indicators depending on the sample type: the highest values were detected in cells contacting Ti 2 and Mg 2 (2 and 3 days, Figure 2D). Thereafter, these parameters decreased, showing stabilization of the cells. Such a change in cell metabolism was associated with the components contained in the coatings on the samples. Effect of the samples on DC maturation Under the influence of various stimuli, DCs undergo a process of maturation that allows them to become more potent inducers of the adaptive arm of the immune response. In the absence of stimuli, the vast majority of DCs are immature. It is unclear whether or not treatment with metal implants can trigger DC maturation, but the ability of the calcium phosphate PEO coating to induce ROS in phagocytes suggests the possibility that these materials might activate this potent antigen-presenting cell (APC) population. DC maturation can be connected to the increased expression of the activation markers CD1a and CD83, similar to what is observed in vivo [10,11]. RMDC treated with RPMI-1640 showed the typical cell surface molecule expression of immature DCs. In contrast, RMDC treated with the PEO-coated samples showed a clear change in the expression levels of CD14, CD34, and CD83 (Figure 3). CD34+ cells in bone marrow are precursors of both DCs and granulocytes; such cells of the "intermediate" type on the 6th day of culture are able, under the influence of an inducer, to differentiate into DCs or leukocytes. In order to study the role of the implants as maturation inducers, the receptor phenotype of human PBMC was analyzed.
The primary culture of human PBMC was placed in vials with the samples and cultured in the presence of GM-CSF and IL-4. As a control, cells adhered to the surface of specialized lectin-coated plastic were used, and Escherichia coli lipopolysaccharide (LPS) was added to obtain a population of mature DCs. It is known that culturing DCs in the presence of GM-CSF and IL-4 supplemented with 2.5 ng/ml of LPS stimulates the maturation of DCs and reduces the number of macrophages in the culture. It was determined that the maximum expression of CD34 on the DC surface was observed on the 1st day of joint incubation with LPS, when the cell content was 72 ± 5.8%. Later, their number decreased, reaching the minimum by the end of the observation period (1.6 ± 0.08%). Under the influence of the implants, the number of CD34+ cells was lower compared to the control. Thus, for the samples with titanium after 1 day the index was 56 ± 4.8%, and for the samples with magnesium, 48 ± 4.6%. The minimum number of these cells was noted at the end of the observation period (day 21) and amounted to 1.8 ± 0.2% and 2.4 ± 0.6%, respectively. Thus, our data indicate an identical effect of the implants on the expression of this adhesion receptor, and the percentage of these cells, reduced relative to the control, points to their pronounced effect as inducers of cell maturation. As an indicator reflecting the direction of differentiation of hematopoietic pool cells under the influence of the implants, the degree of expression of the membrane glycosylphosphatidylinositol-bound CD14 protein was determined. This component is an element of the CD14/TLR4/MD2 receptor complex, which recognizes LPS and is expressed on the surface of myeloid cells, especially on macrophages. We found that, in the control sample under the conditions indicated above, the minimum number of cells with a high degree of CD14 expression was determined on the ninth day, while the content of cells positive for the DC terminal differentiation marker CD83 was at its maximum at this time point. The indicators were 14 ± 1.8% and 67 ± 5.8%, respectively. These data indicate that the introduction of LPS has a pronounced effect on the maturation of the DC population. In studying the degree of expression of these receptors in the cell populations that contacted the implants, it was found that magnesium had the strongest activating effect on differentiation in the direction of DCs. Thus, on the 9th day of joint incubation with magnesium, the CD14+ cell count was 26 ± 2.8% and the CD83+ count 58 ± 4.6%, while on contact with titanium they were 32 ± 3.1% and 48 ± 3.6%. At subsequent observation times, the number of CD14+ cells remained at the specified level and, upon contact with the implants, decreased slightly. These data indicate a pronounced effect of the magnesium implant on the direction of differentiation of hematopoietic pool cells, mainly toward DCs. Of great interest are our data on the degree of expression of the costimulatory molecules CD83 and CD86 on the DC surface during their interaction with the implants, depending on the incubation time. These intercellular adhesion receptors interact with their corresponding ligands with high avidity, provided they are expressed on the cell membranes in clusters. Under the effect of LPS, on day 9 the proportion of CD14+ DCs was minimal against the maximum of CD83+ (2.4 ± 0.2% and 62.4 ± 0.6%, Figure 4C).
Although at the initial observation times LPS activated the maturation of DCs more strongly than the implants, over time the level of expression of costimulatory molecules on the cell surface under the influence of titanium and magnesium increased (Figure 4C). Moreover, despite relatively low percentages of CD83+ and CD86+ DCs, the intensity of their fluorescence increased (Figure 4), and the expression of CD83 molecules on DCs incubated with the implants remained elevated (23.8 ± 2.6% and 21.2 ± 3.4%, respectively) in comparison with DCs treated with LPS (0.6 ± 0.2%) until the end of the observation period. In contact with titanium Ti 1, the number of CD14+ cells in this period was 32 ± 2.1% and of CD83+ cells 48 ± 2.6%; with coated titanium Ti 2, the values were 26.4 ± 2.1% and 52.6 ± 4.6% (Figure 4). These data indicate the effect of the coating on titanium as an inducer of DC differentiation. The leukocyte antigen CD38 is a bifunctional enzyme that combines ADP-ribosyl cyclase and cADP-ribosyl hydrolase activities and is expressed on hematopoietic cells according to their degree of differentiation or proliferation. The product of the enzymatic activity of CD38, cyclic ADP-ribose, is a universal mobilizer of calcium from internal stores. Moreover, the main function of the CD38 receptor is to regulate the activity of cells of the bone marrow, lymphoid tissue, and peripheral blood, stimulating their production of cytokines; it also participates in the migration of DCs. Upregulation of CD38 serves as a marker of cell activation, in particular of the differentiation of B-lymphocytes into plasmocytes. When studying the proportion of the CD38+ phenotype in the pool of undifferentiated cells of the myeloid series before contact with the samples and LPS, their content was determined to be 19.29 ± 1.74%. During the interaction of these cells with LPS, the amount of CD38+ cells increased, with the maximum value on day 3 of incubation, when the indicator was 98.2 ± 9.7%. In the samples incubated with magnesium, an increase in the number of CD38+ cells on day 2 (91 ± 8.92%) was revealed, with a subsequent decrease toward the end of the observation period (28 ± 2.4%). Upon contact with titanium, the maximum CD38+ cell content was observed only on day 9 of incubation, with a value of 82 ± 7.8%. These data indicate an inducing effect of the implants on the maturation of DCs, depending on the contact time, and more so for magnesium than for titanium. Treatment with pathogen-associated molecules generated a population shift from a precursor DC phenotype, traditionally CD14+/HLA-DR+, to increased numbers of the immature and mature DC phenotypes CD14−/HLA-DR+low and CD14−/HLA-DR+high, respectively. CD14+ DCs express the C-type lectin DC-specific intercellular adhesion molecule (ICAM)-3-grabbing nonintegrin (DC-SIGN, CD209), which may also be found on monocyte-derived DCs, especially those generated under tolerogenic conditions such as IL-10. Analysis of the stimulated CD14−/HLA-DR+ population demonstrated significantly enhanced expression of the mature DC-specific markers CD83 and CD209 and the costimulatory molecule CD86 after contact with the calcium phosphate PEO-coated implants, compared to unstimulated controls, and similar to cultures stimulated with LPS (Figure 4C). There was an approximately 3-fold increase in the expression of the DC-specific maturation marker CD83 on DCs treated with Ti 2 and Mg 2 over that of cells on uncoated titanium (p < 0.05).
This shift can be explained by stimulation of the resident cell population by the exposed properties of the calcium-phosphate coating and by point-like molecular effects on receptors. The increases in CD86 and CD123 indicate the maturing DCs' ability to costimulate lymphocytes, triggering their subsequent activation and proliferation, and further suggest the potential to drive such an adaptive response. Cytokine production of the cells DCs are unique antigen-presenting cells that can both participate in inflammatory reactions, by producing a variety of inflammatory mediators, and respond directly to the products of these innate pathways. The study of the effect of titanium and its coating on cellular cytokine production showed that, among the pro-inflammatory cytokines, the largest difference between the indices for intact cells and cells after contact with the samples was found for two cytokines: TNF-α and RANTES (regulated on activation, normal T cell expressed and secreted) (Figure 5). Cytokine production was greatest on contact with uncoated titanium Ti 1 and lower on contact with the calcium phosphate-coated Ti 2. The same dependence was established for the cellular production of the anti-inflammatory cytokines interleukin-6, -10, and -12. These data indicate that, in comparison with the other studied samples, the smallest immunostimulatory effect belongs to the calcium phosphate-coated Ti 2. Discussion During the maturation of DCs, the endocytic potential and the degree of expression of antigen-recognizing receptors decrease, while, conversely, the expression of adhesion molecules bound to the plasma membrane increases. The properties of highly differentiated DCs in triggering an immune response are manifested not only by their ability to present antigens to lymphocytes, but also by unique migration properties that allow them to capture antigens in various tissues of the organism and transport them to regional lymphoid organs. In addition to the changes in the morphology of cell organelles, the activity of lysosomal enzymes and immunoproteasomes, which process intracellularly synthesized protein antigens, is increased, as is the production of pro-inflammatory cytokines and various growth factors, through which T-lymphocytes and connective-tissue cells are activated [1,10,34]. These unique properties allow DCs to be important participants in the process of bone tissue regeneration as initiators of osteolysis, by activating the differentiation and maturation of osteoclasts. From this point of view, the finding of our study that the bioresorbable magnesium implant has a more pronounced effect than bioinert titanium on the process of directed differentiation and maturation of DCs is of particular interest. Our data indicate an identical effect of the implants on the expression of the CD34 adhesion receptor of hematopoietic cells, and the percentage of these cells, reduced relative to the control, points to their pronounced effect as inducers of cell maturation. The cited data indicate a pronounced effect of the magnesium implant and the calcium phosphate coating on the direction of differentiation of hematopoietic pool cells, mainly toward DCs, in comparison with titanium. Although at the initial observation times LPS activated DC maturation more strongly than the implants, over time the level of expression of co-stimulatory molecules on the cell surface under the influence of titanium and magnesium increased.
The CD38 receptor appears on CD34+ committed stem cells and on specific progenitor cells of the lymphoid, erythroid, and myeloid lineages. It is considered that CD38 expression persists only in lymphoid progenitor cells during the early stages. The phenotype and morphology of DCs were differentially modulated by the metal and calcium phosphate-coated surfaces. Specifically, although the expression levels of the DC maturation markers CD83 and HLA-DR were not altered significantly, treatment of DCs with the calcium phosphate coating induced higher expression of the co-stimulatory molecule CD86 relative to iDCs (unstimulated). DC treatment with Ti 1 did not affect CD86 expression compared to iDCs, presumably promoting a noninflammatory environment. It has been shown that CD86 is the most sensitive marker of the DC response to biomaterial treatments and is a valid variable for determining DC maturation levels [31]. Furthermore, DCs contacting the calcium phosphate coating exhibited much more extensive dendritic processes, a morphology associated with maturation. Consistent with the CD86 expression results, DCs incubated with Ti 1 and Mg 1 possessed a rounded morphology, which is associated with immaturity. Despite the non-stimulating nature of the metal implants, the cells were able to mature fully upon LPS challenge (data not shown). The bio-anodized surface, containing Ca and P ions incorporated from the electrolytic solution, improves the biological properties of metal implants [35,36]. The unique electrolyte composition developed here and the formation method enabled the creation of a biologically active PEO coating on the surface of titanium and magnesium, which might affect the occurrence of aseptic inflammation. The results presented herein on DCs in contact with the calcium phosphate coating indicate the importance of surface porosity as a material property that modulates DC phenotype and enzyme activity. Maximum stimulation of cellular enzymes, associated with adhesion to the surface, occurred within the first hours, with the coated titanium showing less stimulating effect than the bare titanium. The most pronounced stimulation of cytochrome oxidase was established in DCs in contact with the hydroxyapatite-like coating. DCs take part in the inflammatory response by regulating mediators such as nitric oxide and pro- and anti-inflammatory cytokines, including in the development of inflammation in the tissue surrounding the implant [37]. Cytochrome oxidase and SDH are the main components of the normal aerobic oxidative system of tissue cells, also known as the succinate dehydrogenase complex, in which SDH is the first component and cytochrome oxidase is the second. Oxidized cytochrome oxidase is reduced by cytochrome c, catalyzing the transfer of four electrons to the oxygen molecule. The cytochrome oxidase activity in cells reflects the level of oxidative metabolism. This enzyme contains a heme group with which the NO molecule interacts. In this case, the interaction of the superoxide anion with NO forms peroxynitrite, a powerful oxidizer capable of inhibiting the activity of mitochondrial enzymes in the cell. Determining the activity of this enzyme allows one to indirectly estimate the ability of cells to produce NO via nitrite reductase pathways [38]. The reduced cell response upon contact with the coating indicates better biocompatibility properties compared to the bare metal implants.
In general, these data indicate a mixed cellular reaction to contact with the coatings and to the properties of the metals. In addition, DCs contacting the sample surfaces produced differential cytokine profiles. Alongside the high expression level of CD86 and the dendritic morphology, DCs contacting the calcium phosphate coating and Mg 1 released higher amounts of the cytokines RANTES, IL-1β, and IL-6 compared to immature DCs or Ti 1-treated DCs. Although some trends in the release of TNF-α and IL-12 were observed, the differences were not statistically significant. A wider array of cytokines and chemokines was subsequently analyzed in order to better delineate the cytokine responses upon DC treatment with the Ti surfaces. Treatment of DCs with the calcium phosphate coating on Mg promoted enhanced production of the chemokine RANTES, a mediator of acute and chronic inflammation, compared to immature DCs, to a level similar to that of LPS-treated mature DCs. Unlike titanium, magnesium, a certain time after insertion into the injury site, is resorbed by osteoclasts and other professional phagocytes, and here the highly differentiated DCs already present at the site of aseptic inflammation are of great importance. These cells, having already acquired the properties of highly specialized antigen-presenting participants in the elimination of undesirable components of inflammation, can also have an indirect effect on the process of osteosynthesis by releasing a variety of factors, including cytokines, into the interstitial space to attract connective tissue cell elements. The relatively low degree of activating effect of the titanium implant on DCs confirms its bioinertness with respect to the immune system, which indicates its positive qualities as a material that remains in the organism continuously. DCs participate in the inflammatory response by regulating mediators such as nitric oxide and pro- and anti-inflammatory cytokines. The coating materials exhibited better biological compatibility than the bare metal implants. The immunomodulatory properties of currently available implant coatings need to be improved in order to develop personalized therapeutic solutions. DCs exposed to the implantable materials ex vivo can be used to predict an individual's reactions and allow selection of an optimal coating composition, which opens prospects for the use of these cells in diagnostic and therapeutic approaches to personalized implant therapy. Conclusions The calcium phosphate-coated surfaces have a very similar chemical composition but differ in the kind of metal substrate, with different properties: titanium is bioinert and magnesium is bioresorbable. The comparable levels of CD86 expression for DCs contacting the Ti 2 and Mg 2 surfaces suggested that the kind of metal substrate is not crucial in modulating the DC phenotype. The calcium phosphate-coated surfaces have the same roughness and were prepared, by plasma electrolytic oxidation in the unipolar PEO mode, so as to retain their high surface energy. In this study, the different metal implant surfaces, bare and coated, were shown to induce differential DC phenotypes upon treatment. DCs treated with the calcium phosphate surfaces exhibited a more mature phenotype, whereas DCs treated with the Ti 1 and Mg 1 surfaces maintained an immature phenotype. These results indicate another benefit of the bare metal surfaces for promoting bone formation and integration, namely providing a local noninflammatory environment.
Furthermore, the calcium phosphate surfaces pointed to possible material property-DC phenotype relationships for implant design. There is mounting evidence suggesting the involvement of the immune system, by means of activation by metal ions released via biocorrosion, in the pathophysiologic mechanisms of aseptic loosening of orthopedic implants. However, the detailed mechanisms of how metal ions become antigenic and are presented to T-lymphocytes, and of how the local inflammatory response is driven, remain to be investigated.
9,286.2
2018-11-05T00:00:00.000
[ "Medicine", "Biology" ]
Quantum-induced trans-Planckian energy near horizon We study the loop effects on the geometry and boundary conditions of several black hole spacetimes, one of which is time-dependent, and analyze the energy measured by an infalling observer near their horizons. The finding in previous works that the loop effects can be drastic is reinforced: they play an important role in the boundary conditions and in non-perturbative geometry deformation. One of the channels through which the quantum gravitational effects enter is the generation of the cosmological constant. The cosmological constant feeds part of the time-dependence of a solution. We obtain a trans-Planckian energy in the time-dependent case. The importance of time-dependence for the trans-Planckian energy and black hole information is discussed. Introduction The remarkable developments in astrophysical observation, such as the detection of gravitational waves [1] and the Event Horizon Telescope project [2], could offer valuable guidance toward the correct formulation of a theory of quantum gravity (see, e.g., [3][4][5][6][7] for various reviews from theoretical and observational points of view). Black holes provide an optimal arena for studying quantum gravitational physics: they are mathematically simple at the classical level yet require quantization for a complete and proper understanding. In particular, the black hole information problem poses a challenge that will, once surmounted, take us to the next chapter of understanding astrophysical black holes at a more fundamental level. Some time ago, interest in the black hole information problem was renewed by the Firewall argument [17,18], followed by various debates. One of the facts brought home, perhaps more systematically than ever, by the Firewall observation is that our understanding of black holes, and of gravitational physics as a whole, is as yet incomplete. The Firewall proposal has challenged, among other things, the conventional view that a free-falling observer would not experience anything out of the ordinary when passing through the horizon: the observer should instead encounter trans-Planckian energy radiation. We have recently proposed in [19] that quantum gravitational effects should be responsible for the production of high energy radiation and, ultimately, may well hold the key to the information puzzle. Although the study of the information problem has a long history, two critical ingredients that could have led to a firmer grip on the problem had not, in the past, been quantitatively taken into account in the way they have been in our recent and present works. They are quantum gravitational effects and non-Dirichlet boundary conditions. We will examine them in detail in the main body by considering three black hole spacetimes, continuing the endeavor initiated in [19,20] and [21]. In particular, we analyze the quantum gravity-induced energy measured by an infalling observer near each horizon. In the past, the apparent loss of information led to suspicion of a certain unknown information bleaching mechanism, and the potential relevance of quantum gravitational effects to the information problem was considered in the literature; see the review [8]. However, the idea was not pursued at a quantitative level (presumably because of the difficulty of seeing what process could possibly be responsible for such bleaching).
In [19][20][21][22], we have unraveled a potential mechanism: one facet of the quantum gravitational effects should act as an information-bleaching process. The issue of boundary conditions in gravitational physics seems profound. (See, e.g., [23][24][25] for progress in boundary conditions and dynamics.) It awaits a more complete and comprehensive treatment in the context of quantization. A meaningful observation on boundary conditions as a crucial component of the Hilbert space has recently been put forth in the loop quantum gravity works [26] and [27]. Although the widely used Dirichlet boundary conditions have been successful in non-gravitational areas, narrowing down to the configurations with these boundary conditions in a gravity theory results in the wipe-out of much of the information, as demonstrated in the recent works [19,22,28,29]. The surprising fact that the Dirichlet boundary condition is at odds with the information content of the system seems quite subtle, and must be due to the fact that the physical states of a gravitational system happen to have their support at the boundary hypersurface, the holographic screen, which in turn has its origin in the large amount of gauge symmetry of a gravitational system [30]. The aforementioned two ingredients are not independent but intricately intertwined. As we will show, the quantum effects exert a close influence on the boundary conditions and geometry, especially the time-dependent one. The influences among these entities are mutual, though we will take the quantum effect-centered view. The influence of quantum effects on the geometry is quite natural [19,21]. The way the boundary conditions figure into the mutual relations has just been recognized [28,29]. One of the focuses of the present work is the manner in which the quantum effects and boundary conditions feed the time-dependence of a solution. There are several routes to probing the perturbative loop effects on the geometry and physics. One of the approaches that can be taken with a reasonable amount of calculation is to study the deformation of the geometry analyzed through the 1PI effective action [31,32]. (See, e.g., [33] for a review of the 1PI effective action in the gravity context.) A more effortful direction would be one based on a wave packet; more on this in the conclusion. If one additionally works out the geodesics, it is straightforward to calculate the energy measured by an infalling observer, although the algebra involved is usually heavy. In [21], one-loop correction terms in the 1PI action were examined to see whether they would lead to a trans-Planckian energy when evaluated in a time-independent background, a Schwarzschild-dS background. They did not. The analysis was then repeated for the time-dependent quantum-level background; there it was revealed that they do yield a trans-Planckian energy. One of the key lessons learned through those (and the present) analyses is that there exist circumstances, such as when nonperturbative physics is relevant, where the quantum gravitational effects cannot be set aside as small. In the conclusion, we will argue that such circumstances must be quite common rather than exceptional. Two questions may arise. Firstly, in the case of the time-independent solution, could it be the high degree of symmetry of the solution that suppresses the trans-Planckian behavior? A time-independent background with less symmetry should be worth examining.
Secondly, one may wonder whether or not the fact that quantum effects feed a time-dependent solution would persist in other cases. Put differently, how generic is the existence of time-dependent solutions fed by quantum effects? These questions will be addressed in the main body. The paper is organized as follows. In section 2 we start with some of the salient features of the quantization of gravity recently proposed in [30,35,36]. The quantization procedure generically leads to a quantum-corrected and/or quantum-generated cosmological constant that in turn has a significant impact on solution generation: its presence contributes to a qualitatively different solution, in the sense that the solution is time-dependent. This is a non-perturbative effect, though the 1PI action is obtained perturbatively, entering through the back reaction of the metric and matter fields. Certain conceptual as well as technical aspects of the quantization procedure are essential: they not only provide the foundation on which the subsequent analysis is laid but also reveal some crucial aspects of the cosmological constant. The main theme of the present computation is the energy measured by an infalling observer. Because the tasks involved require intensive analyses, we illustrate the procedure with a simpler background, a Schwarzschild-Melvin solution. Then we consider another, more complex stationary background; it is the recently constructed generalization of the Schwarzschild-Melvin solution [37]. After recalling the findings in the previous works [28,29] and [21], we consider a time-dependent black hole spacetime in section 3. It is an extension of the time-dependent black hole solution previously obtained in [38]. The same kind of trend as observed in [21] is also observed here: whereas the classical terms do not give the Firewall energy, the quantum effects do lead to a trans-Planckian energy. In the conclusion, we end with further remarks and future directions. Time-independent cases In this section we demonstrate the steps of the energy computation with time-independent black holes. For calculating the one-loop-corrected energy measured by an infalling observer, one needs to obtain the one-loop geodesic as well as the stress-energy tensor in the background under consideration. This has been carried out in [21] for the case of a Schwarzschild-dS background: although the Schwarzschild-dS background itself did not, at least at one loop, lead to a trans-Planckian energy, its time-dependent quantum extension did. Here we look into the possibility that a time-independent solution with less symmetry might lead to a trans-Planckian energy. The result again yields a negative answer, up to a certain subtlety that we will discuss below (we turn to a time-dependent case and find an affirmative answer in section 3). Let us consider the Einstein-Maxwell action, $S = \frac{1}{\kappa^2}\int d^4x\,\sqrt{-g}\,R - \frac{1}{4}\int d^4x\,\sqrt{-g}\,F_{\mu\nu}F^{\mu\nu}$. The metric field equation is $G_{\mu\nu} = \frac{\kappa^2}{2}\,T_{\mu\nu}$, where G is Newton's constant with $\kappa^2 \equiv 16\pi G$; the Einstein tensor and the stress-energy tensor are defined respectively by $G_{\mu\nu} \equiv R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R$ and $T_{\mu\nu} \equiv F_{\mu\rho}F_\nu{}^{\rho} - \frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}$. One-loop stress-energy tensor Although presenting a thorough analysis of the quantization procedure is not one of the present goals (because only the final outcomes will be needed for the analysis in the subsequent sections), it will be useful to have a quantum-level perspective.
Before getting into the interwoven relationships among the quantization procedure, boundary conditions, loop effects and time-dependent solutions, we start with a brief account of the salient features of the quantization of the Einstein-Maxwell system. The content of this section is essential for the correct overall picture. The quantization procedure brings to light a number of conceptual and technical issues not perceived in the past. (As an aside, not all the steps of the quantization scheme of [30] are needed here, because we are only interested in the one-loop analysis. For example, reduction of the physical states is not necessary to establish the one-loop renormalizability: the conventional method is sufficient in the presence of the cosmological constant [36]. Also, it is not obvious whether or not the quantization scheme could be applied to the three backgrounds considered in this work, since the time-independent backgrounds are not, for example, asymptotically flat; more work is required to settle this matter.) Let us start with the boundary terms and conditions. The issue of the boundary conditions has recently turned out to be much subtler than previously thought. The surface terms are important in several ways, both at the classical and quantum levels. Here we focus on their quantization-related aspects, returning in what follows to the (better-known) subtleties in the definition of the classical stress-energy tensor. In applying the action principle, one normally adds Gibbons-Hawking-type boundary terms by way of imposing the Dirichlet condition. It has recently been revealed that the Dirichlet boundary condition is just one of possibly many boundary conditions to be collectively considered for the sake of a proper treatment of the entire Hilbert space. The status of this matter has several implications. One obvious implication is that it is now necessary to explore other types of boundary conditions. For instance, it was illustrated in [39] with the Einstein-Hilbert action that the boundary terms can be removed by the physical state conditions. Since this could be achieved without adding the Gibbons-Hawking term, the boundary condition was not restricted to the Dirichlet one. Another, not-so-obvious, implication is that one must check whether or not the classical-level boundary conditions are honored by the quantum corrections [28]. (In section 3 we will push further along this direction.) A quite serious technical obstacle in the effective action computation is the complexity of the propagators associated with curved backgrounds: they are known in closed form only for a very limited number of cases. (To make matters worse, the effective action contains nonlocal terms in general. Such nonlocal terms could be important for the black hole physics at hand [32]; they will not be considered in the present work, for simplicity.) Thus it is difficult to conduct perturbation theory around the actual curved background under consideration. Although not a stalemate, this makes it necessary to employ additional measures, such as covariance and dimensional analysis, in order to determine the forms of the terms in the 1PI effective action. Also, since we are mostly interested in the ultraviolet divergences, the flat space propagator can be employed to capture them. One recent undertaking was the construction of the propagator out of the traceless components of the fluctuation metric [40]. The necessity of employing the "traceless" propagator arises because the 4D covariance is maintained only when the traceless propagator is employed [40]. An earlier related observation can be found, e.g., in [41].
The construction of the traceless propagator has been achieved in a manner convenient for the perturbative analysis. For a gravity-scalar system, the explicit one-loop analysis via employment of the traceless graviton propagator has been carried out in [36]. Similarly, the forms of the counter-terms in the case of the Einstein-Maxwell system can be rather easily determined by a combination of direct computation, dimensional analysis and 4D covariance. With all these, one important aspect of the quantum effects is that the cosmological constant term is quite generically generated, regardless of the background under consideration [40]. At the quantum level, the stress-energy tensor computation should be done by starting with the renormalized action, $S_r = \frac{1}{\kappa_r^2}\int d^4x\,\sqrt{-g_r}\,R_r - \frac{1}{4}\int d^4x\,\sqrt{-g_r}\,F_{r\,\mu\nu}F_r^{\mu\nu}$ (2.5), where the renormalized quantities are indicated by the subscript r. After the one-loop analysis, the 1PI effective action with the counter-terms takes the schematic form (see [40] for details; earlier related analyses can be found, e.g., in [42][43][44]; the counterterm computation for the Einstein-Maxwell action was of course done long ago, but our recent finding shows that the correct determination of the coefficients requires use of the traceless propagator) $\Gamma = \int d^4x\,\sqrt{-g}\left[\frac{1}{\kappa^2}\left(R - 2\Lambda\right) + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu} + \cdots\right]$, where the c's are constants, whose explicit values are not important for our purpose, that can be determined once the renormalization conditions are fixed. (Although the divergences can be determined by using a flat space propagator, the proper curved space propagator must be employed for the finite parts of the Feynman diagrams. The finite parts of the renormalized coefficients can then be fixed with a specific choice of a set of renormalization conditions.) The cosmological constant has a purely quantum origin, since the classical part was absent in (2.5). The part in the ellipsis contains the correction terms involving the Maxwell fields as well. In the presence of the cosmological constant, the one-loop renormalizability can be established along the lines of the conventional framework. Above, the Riemann tensor square term, $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$, appearing among the one-loop terms has been replaced by $R_{\mu\nu}R^{\mu\nu}$ and $R^2$ through the Euler-Gauss-Bonnet topological identity, which renders $\int d^4x\,\sqrt{-g}\left(R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - 4R_{\mu\nu}R^{\mu\nu} + R^2\right)$ topological in four dimensions. As for the stress-energy tensor, there have been longstanding debates, even at the classical level, on its definition (see, e.g., [45,46]). In general the surface terms matter for the stress-energy tensor, and they are responsible for part of the complications associated with its definition. A systematic treatment of the surface terms deserves work dedicated to itself, and we will not attempt it here. One subtlety that is not as complicated is whether or not the one-loop-generated terms such as $R^2$, $R_{\mu\nu}^2$ should be included in the stress-energy tensor on the right-hand side of the metric field equation. Considering the Bianchi identity associated with the Einstein tensor, it seems reasonable to place all of the quantum correction terms together with the matter part. (This is also consistent with the definition of a stress-energy tensor given in [47] in the context of higher derivative gravity.) The stress-energy tensor is obtained by taking the functional derivative of the matter part of the action with respect to the metric: $T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta S_{\rm matter}}{\delta g^{\mu\nu}}$ (2.8). In what follows we consider two backgrounds. The first is a Schwarzschild-Melvin solution [48]; the second is the recently found generalization of the Schwarzschild-Melvin solution [37]. Because the latter solution is more complex and less symmetric than the Schwarzschild-Melvin solution, it should provide a good test bed for one of the questions raised in the introduction.
(The third case, to be considered in section 3, is an extension of the time-dependent AdS black hole analyzed at the classical level in [38] and at the quantum level in [28] and [29].) Schwarzschild-Melvin case The Schwarzschild-Melvin solution of the Einstein-Maxwell action above represents a Schwarzschild black hole immersed in an external magnetic field. It is given by $ds^2 = \Lambda^2\left[-f(r)\,dt^2 + f(r)^{-1}dr^2 + r^2 d\theta^2\right] + \Lambda^{-2} r^2\sin^2\theta\,d\varphi^2$ with $f(r) = 1 - \frac{2M}{r}$ and $\Lambda \equiv 1 + \frac{1}{4}B^2 r^2\sin^2\theta$, and the vector field is given by $A_\varphi = \frac{B r^2 \sin^2\theta}{2\Lambda}$ (we quote the standard Ernst form of the solution; conventions for B may differ by factors of κ). Since the solution represents a black hole inside a magnetic field of infinite extent, it will be physical only when the combination Bκ is small. The coordinates (t, φ) are cyclic and lead to the following first integrals: $E = \Lambda^2 f(r)\,\dot t$ and $l = \Lambda^{-2} r^2 \sin^2\theta\,\dot\varphi$, where the dot means $\dot{}\equiv d/du$; E, l are constants representing the conserved energy and angular momentum. The geodesic $U^\mu$ satisfies the normalization $U_\mu U^\mu = s$, where s = 0, −1 for null and timelike geodesics, respectively. The remaining second-order geodesic equations are presented in the appendix. The normalization can be written as $\Lambda^2\left(-f\,\dot t^2 + \dot r^2/f + r^2\dot\theta^2\right) + \Lambda^{-2} r^2\sin^2\theta\,\dot\varphi^2 = s$. In principle, one should compute the geodesic up to (and including) the first subleading order in κ². For the leading order, however, the quantum correction piece of the geodesic does not contribute when contracted with the stress-energy tensor, and one can therefore focus on the classical geodesic equations. Let us consider the θ = π/2 case, for which the equation above becomes substantially simplified. (The qualitative conclusion on the energy is not expected to change in more general cases.) With θ = π/2, eq. (A.5) is satisfied and one can show $\dot r^2 = \frac{E^2}{\Lambda^4} + f(r)\left(\frac{s}{\Lambda^2} - \frac{l^2}{r^2}\right)$, with Λ evaluated at θ = π/2. We now turn to the task of computing the energy density as measured by a free-falling observer: $\rho \equiv T_{\mu\nu} U_K^\mu U_K^\nu$, where $U_K^\rho$ denotes the four-velocity of an infalling observer in the Kruskal coordinates. $T_{\mu\nu} \equiv \langle K|\hat T_{\mu\nu}|K\rangle$ denotes the quantum-corrected stress tensor (2.8) (reviews on the quantum-level stress tensor can be found in [49][50][51][52]): $\hat T_{\mu\nu}$ represents the operator corresponding to the classical stress-energy tensor, and $|K\rangle$ denotes the Kruskal (i.e., Hartle-Hawking) vacuum. Let us examine the terms in (2.8) to see whether or not they yield a high energy upon being contracted with the four-velocities. Although the cosmological constant term comes with 1/κ², its contribution to ρ should be small because of the small value of Λ. Let us first consider the matter sector of the stress tensor. When evaluated at the classical background, the F² term in the stress tensor yields a contribution that vanishes as r → 2M. As for the $F_{\mu\rho}F_\nu{}^\rho$ term, one can show that its contribution is likewise controlled by the small factor B²κ². Most of the gravity sector terms either identically vanish or vanish at the horizon; for example, one gets, when evaluated at the classical background, contributions that vanish at r = 2M. As previously stated, the configuration is physical only when B²κ² is small, and the energy encountered by an infalling observer will be moderate.
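To make the geodesic bookkeeping above concrete, the following symbolic sketch reproduces the first integrals and the radial equation in the B → 0 (pure Schwarzschild) limit at θ = π/2; the Melvin factor Λ only dresses these expressions, as in the formulas quoted above. This is an illustrative check, not the paper's computation:

import sympy as sp

r, M, E, l = sp.symbols('r M E l', positive=True)
s = sp.Symbol('s')                   # s = 0 (null) or s = -1 (timelike)
f = 1 - 2*M/r

tdot = E/f                           # first integral from the cyclic coordinate t
phidot = l/r**2                      # first integral from the cyclic coordinate phi

rdot2 = sp.Symbol('rdot2')
# equatorial normalization: -f*tdot^2 + rdot^2/f + r^2*phidot^2 = s
sol = sp.solve(sp.Eq(-f*tdot**2 + rdot2/f + r**2*phidot**2, s), rdot2)[0]
print(sp.simplify(sol - (E**2 + f*(s - l**2/r**2))))   # 0: rdot^2 = E^2 + f (s - l^2/r^2)

# tdot = E/f diverges at r = 2M in these (Schwarzschild) coordinates,
# which is why the energy density is evaluated in the regular Kruskal frame
print(sp.limit(tdot, r, 2*M, '+'))   # oo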
Generalized Schwarzschild-Melvin case A new black hole solution with an asymptotically uniform magnetic field has been constructed in [37] by utilizing the so-called lightcone gauge. It is a two-parameter generalization of the Schwarzschild-Melvin solution and reduces to the Schwarzschild-Melvin spacetime in a certain parameter limit. Evidently it is more complex than the Schwarzschild-Melvin solution and should provide a test bed for examining the potential presence of the trans-Planckian energy. The solution is obtained as a perturbation around the Schwarzschild black hole, $ds^2 = -f(r)\,dv^2 + 2\,dv\,dr + r^2 d\Omega^2_{(2)}$ with $f(r) = 1 - \frac{2M}{r}$. Just as in the Schwarzschild-Melvin case, it is useful to find the first integrals. For this, note that the coordinates t, φ are again cyclic, leading to first integrals with conserved constants E and l, where again $\dot{}\equiv d/du$. The remaining geodesic equations can be found in appendix A. The geodesic normalization condition, $U_\mu U^\mu = s$, is imposed to first order in the perturbed metric. Things get quite simplified by choosing θ = π/2; after some algebra (more details in the appendix) one obtains the corresponding expression for $\dot r^2$. With the help of the Mathematica package diffgeo.m, it is checked that the computation of ρ does not yield a trans-Planckian energy in this case either: for example, the contribution of the $F_{\mu\rho}F_\nu{}^\rho$ term is, as before, rendered small by the small parameter B²κ². Trans-Planckian energy Although a more systematic and complete study of boundary conditions is still to be carried out in gravity quantization, it is nevertheless possible to probe the role of the boundary modes in the dynamical evolution of the system. In this section we deepen our understanding of the case whose analysis was carried out to some extent in [28] and [29]; there it was shown that the quantum gravitational effects and non-Dirichlet modes (to be defined) lead to a time-dependent solution. After reviewing [28] and [29] in section 3.1, we extend the analysis by focusing, for one thing, on the loop-corrected cosmological constant. The trans-Planckian energy does not arise at the classical level. This very fact may not be so surprising; however, the detailed manner in which this happens is. We show that the quantum-level solution does display a trans-Planckian energy. The classical action we consider in this section is that of a gravity-scalar system with a cosmological constant, $S = \int d^4x\,\sqrt{-g}\left[\frac{1}{\kappa^2}(R - 2\Lambda) - \frac{1}{2}(\partial\zeta)^2\right]$ (3.1). It admits an AdS black hole solution (3.2), whose explicit form in the coordinates used below appears in section 3.3. Time-dependent solution of gravity-scalar system A gravity-scalar system was considered at the quantum level in [28] and [29]. The one-loop 1PI effective action (3.4), obtained after one-loop renormalization of the classical action (3.1), is of the same schematic form as in section 2, now with scalar counter-terms included, where the e's are constants that can be fixed once the renormalization conditions are fixed. The metric and scalar field equations then follow from (3.4). The field equations can be solved by employing the metric ansatz of [38] with the quantum-corrected series (3.7) for the metric functions, where the modes with superscript 'h' represent the quantum modes. The quantum corrections of the metric imply a deformation of the geometry by quantum effects [19,21]. (See also [10,53,54] for related works.) Similarly, for the scalar field one employs a quantum-corrected series (3.8). It was found that the Dirichlet boundary condition is not preserved by the quantum corrections. Different boundary conditions can be adopted by adjusting the boundary modes. For example, one imposes Φ_0(t) = 0, Φ^h_0(t) = 0 and Φ^h_{-1}(t) = 0 for the Dirichlet boundary condition. Modes such as Φ_0(t), Φ^h_0(t), Φ^h_{-1}(t) will be called the "non-Dirichlet modes" for this reason.
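To fix notation for the boundary modes just introduced, the expansions have the following schematic structure; this is a sketch only: the precise powers of z are those of [38] and [29], and the $z^{-1}$ placement of $\Phi^h_{-1}$ is inferred here solely from its label:

$$\zeta(t, z) = \Phi_0(t) + \zeta_1(t)\,z + \zeta_2(t)\,z^2 + \cdots + \hbar\left[\Phi^h_{-1}(t)\,z^{-1} + \Phi^h_0(t) + \zeta^h_1(t)\,z + \zeta^h_2(t)\,z^2 + \cdots\right],$$

so that the Dirichlet boundary condition amounts to switching off the boundary coefficients $\Phi_0$, $\Phi^h_0$ and $\Phi^h_{-1}$.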
The following choice, which corresponds to a non-Dirichlet boundary condition, was explored: Φ_0(t) = 0, Φ^h_0(t) = 0, with Φ^h_{-1}(t) left unconstrained. (3.9) By analyzing the field equations expanded in the z-series, one can derive the mode relations (3.10) for the classical modes and (3.11) for the quantum modes. An intriguing finding was that the quantum-level analysis actually imposes additional constraints on the classical modes. (The field equations have terms of order ħ, since all of the coefficients e's in (3.5) come with ħ, and once the series ansatze (3.7) and (3.8) are substituted, the classical modes such as ζ_1, ζ_2 come to appear in the parts of the equations of order ħ, which leads to additional constraints among the classical modes. We will come back to this in the conclusion.) Since this is an important point, we elaborate: according to the classical analysis [38], the modes ζ_1, ζ_2 are free and responsible for the entire dynamics, as the higher modes are given in terms of ζ_1, ζ_2 and their derivatives. However, the quantum-level analysis unravels that the two modes become constrained, as displayed in (3.12); in particular, when the classical non-Dirichlet mode Φ_0 is absent, they are forced to vanish, ζ_1 = 0 = ζ_2. On the contrary, the quantum-counterpart modes, ζ^h_1 and ζ^h_2, are not constrained. As a matter of fact, together with Φ_0(t), Φ^h_0(t) and Φ^h_{-1}(t) they determine the higher modes; namely, the higher modes become functions of these modes. Let us pause and ponder the implications of these results. Firstly, the solution represents the quantum-modified time-dependent black hole solution, and the quantum modes above are the ones that feed the time-dependence of the solution. Secondly, the presence of such modes implies that the quantum-corrected solution no longer satisfies the Dirichlet condition. Their presence also implies nontrivial dynamics on the boundary, where part of the system information is stored. The third implication is perhaps even more intriguing. The time-dependence of the classical black hole solution with a Dirichlet boundary condition is an apparent phenomenon: were it not for the presence of the quantum modes, the quantum-level constraints would force the solution to reduce to a time-independent configuration, namely an AdS black hole, when the classical non-Dirichlet mode Φ_0 is absent. Before we proceed, let us note a curious resemblance to the finding in [21], where the time-dependent solution constructed in [55] was checked against a trans-Planckian energy. There, elimination of the cosmological constant term made the time-dependence disappear. In the case of [28] and [29] just reviewed, what feeds the time-dependent solution is the non-Dirichlet modes. As we will soon see, it is not only the non-Dirichlet modes but also the quantum-corrected cosmological constant that feeds the time-dependence, which in turn will be crucial for the trans-Planckian energy. Extension by quantum cosmological constant The analysis in [28] and [29] did not take into account the quantum corrections of the cosmological constant. In other words, the cosmological constant Λ was taken to be entirely classical. Here we extend the analysis by focusing on the effects of the loop-corrected cosmological constant, writing it explicitly as Λ ≡ Λ_0 + κ²Λ_1, with Λ_0, Λ_1 classical and quantum, respectively. With this, slightly modified mode relations are obtained; although the modifications are modest, the implications are not insignificant, and several interesting aspects of the dynamics are revealed. For instance, the quantum-induced cosmological constant contributes to the time dependence of the solution.
The procedure of solving the field equations goes through as before, apart from having to include the quantum correction piece of the cosmological constant. For the classical modes one gets the relations (3.13), and for the quantum modes the relations (3.14). Several salient features of the outcome are as follows. The result shows that in order for, e.g., F_2 not to vanish, the presence of the non-Dirichlet mode Φ_0(t) is important. To see things more clearly, let us set the quantum modes aside entirely. As can be seen from (3.13), there still exists a time-dependent solution (one that can be consistently extended to the quantum level) if one keeps the non-Dirichlet mode Φ_0(t). The Dirichlet condition tends to suppress the time dependence: suppose the quantum mode Φ^h_{-1}(t) and its derivative Φ̇^h_{-1}(t) are absent; then F^h_1(t) would vanish if Φ_0(t) is absent as well. This shows that the non-Dirichlet modes and the quantum corrections together feed the time-dependence of the solution. (More on the non-Dirichlet modes in the conclusion.) Also, Λ_1 contributes to F^h_2; this shows that the quantum-induced cosmological constant, too, contributes to the time dependence of the solution. The following will be important for the energy analysis in the next subsection. As stated in the previous subsection, the classical time-dependent solution of [38] is 'demoted' to an AdS black hole by the quantum-level constraints. The classical-level time-dependence of the solution of [38] is not preserved at the quantum level: additional constraints among the classical modes arise at the quantum level, and once those constraints are enforced on the classical part of the solution, the classical metric becomes that of the usual time-independent AdS black hole. Trans-Planckian energy Let us compute the energy density measured by a free-falling observer, ρ ≡ T_{μν}U^μU^ν, where T_{μν} denotes the quantum-corrected stress tensor (2.8) and U^μ the geodesic. As in section 2, we first work out the geodesics. The geodesics of the classical AdS black hole can be used for the purpose of computing ρ, for a reason that will become clearer below. The stress-energy tensor must be evaluated at the quantum-corrected solution. Since we are ultimately interested in the energy near the horizon, we will, at some point, consider the solution in the z − z_EH series, where z_EH denotes the location of the classical horizon. The classical part of the solution obtained in the previous subsection is time-dependent in general due to the presence of the non-Dirichlet mode Φ_0(t), and this causes unnecessary complications in finding the geodesic. We thus choose Φ_0(t) = 0, so that the quantum-level constraints enforce ζ_1 = 0 = ζ_2. Since ζ_1 = 0 = ζ_2, the classical part of the full quantum-level solution is the same as the well-known one given in (3.2). (Nevertheless, the overall solution will be time-dependent, due to the presence of the time-dependent quantum modes.) Although this is a special case, it is expected to share the important features of a more general solution when it comes to the trans-Planckian scaling of the energy. With this, the classical geodesic can be computed straightforwardly.
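As a cross-check of the geodesic quantities used in what follows, here is a minimal symbolic sketch. It assumes the planar AdS4 black hole in the form $ds^2 = z^{-2}(-f\,dt^2 + f^{-1}dz^2 + dx^2 + dy^2)$ with $f(z) = 1 - 2Mz^3$ (our reading of the metric quoted below, with Λ = −3) and an observer with no transverse motion:

import sympy as sp

z, M, E = sp.symbols('z M E', positive=True)
s = sp.Symbol('s')                       # s = 0 (null) or s = -1 (timelike)
f = 1 - 2*M*z**3

tdot = E*z**2/f                          # first integral from t-independence of the metric

zdot2 = sp.Symbol('zdot2')
# normalization U.U = s with no transverse motion: z^-2 * (-f*tdot^2 + zdot^2/f) = s
sol = sp.solve(sp.Eq((-f*tdot**2 + zdot2/f)/z**2, s), zdot2)[0]
print(sp.simplify(sol - (E**2*z**4 + s*z**2*f)))   # 0: zdot^2 = E^2 z^4 + s z^2 f

# f has a simple zero at the classical horizon z_EH = (2M)^(-1/3),
# so tdot ~ (z - z_EH)^(-1): the divergence invoked in the text
zEH = (2*M)**sp.Rational(-1, 3)
print(sp.simplify(f.subs(z, zEH)))       # 0
print(sp.limit(tdot, z, zEH, '-'))       # oo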
From the metric of the AdS black hole, $ds^2 = \frac{1}{z^2}\left(-f(z)\,dt^2 + \frac{dz^2}{f(z)} + dx^2 + dy^2\right)$ with $f(z) = 1 - 2Mz^3$ (we have set Λ = −3, following the common practice in the literature), where M is a parameter proportional to the mass of the black hole, the first integrals follow; in particular $\dot t = E z^2/f(z)$, with the transverse momenta set to zero. With these, the velocity normalization condition, U² = s, takes the form $z^{-2}\left(-f\,\dot t^2 + \dot z^2/f\right) = s$, and one gets $\dot z^2 = E^2 z^4 + s\,z^2 f(z)$. The one-loop stress-energy tensor is given by (2.8) evaluated in this background. Let us focus on the leading order terms of the matter sector. (As before, we disregard the cosmological constant term in the stress tensor.) The term proportional to $g_{\mu\nu}$ is bound by the geodesic normalization, $U_\mu U^\mu = s$, and is thus of subleading order. Given the structure of $\dot t$ above, the scalar kinetic term $\partial_\mu\zeta\,\partial_\nu\zeta$ can potentially yield a large value of the energy. In other words, at least naively, a large value of the energy is expected to come from the $\dot t$ components of ρ, since $\dot t \sim \frac{1}{1 - 2Mz^3}$ and the classical horizon z_EH is located at the vanishing of 1 − 2Mz³, i.e., $z_{EH}^3 \equiv \frac{1}{2M}$. It is possible at this point to see why a classical, as opposed to one-loop, geodesic is sufficient for our purpose, a statement made earlier. Let us examine the contribution of the scalar kinetic term to ρ, $\partial_\mu\zeta\,\partial_\nu\zeta\,U^\mu U^\nu$. With $\dot t \sim \frac{1}{z - z_{EH}}$, it is the $\dot\zeta\dot\zeta\,\dot t\dot t$ piece that will give the leading order energy. From this it follows that the classical part of the geodesic is sufficient for obtaining the one-loop ρ: because the quantum-level field equations constrain ζ_1, ζ_2 such that ζ_1 = 0 = ζ_2, the time-dependent part of the solution of the field equations is only the quantum correction piece. Since the stress-energy tensor part, namely $\partial_\mu\zeta\,\partial_\nu\zeta$, is already second order in ħ (which we have been suppressing) and κ², the geodesic of the classical AdS black hole is sufficient. For the remainder of this subsection, we examine the κ-scalings of various quantities to determine the scaling of the energy. At least to the orders analyzed in [29] and reviewed above, the classical piece of the scalar field is absent: the original scalar field expansion reduces, on account of (3.10) and (3.11), to its purely quantum series (3.22), in which the modes ζ^h_1(t), ζ^h_2(t) are free (i.e., unconstrained) and the expression for, e.g., ζ^h_3(t) can be found in (3.14). The vanishing of the classical piece will bear important implications for the energy, so we run a double-check to ensure that it remains true to all orders in z, not just to the first several orders explicitly checked. To this end, and for a more transparent understanding of the behavior of the scalar near the horizon, let us re-expand the z-series solution in z − z_EH. The re-expansion of (3.22) around z_EH takes, a priori, the schematic form $\zeta = \sum_{n\geq 0}\tilde\zeta_n(t)\,(z - z_{EH})^n + \sum_{n\geq 0}\tilde\zeta^h_n(t)\,(z - z_{EH})^n$, with the quantum pieces carrying their ħ and κ² factors. (3.23) Given $\dot t \sim \frac{1}{z - z_{EH}}$, a potentially large value of the energy will arise from the term of $(z - z_{EH})^0$ order, $\tilde\zeta_0(t)$. As for the quantum mode $\tilde\zeta^h_0(t)$, it comes with a κ² factor and is set aside for now (we will come back to it below); let us focus on the classical mode $\tilde\zeta_0(t)$. Since (3.23) is a re-expansion of (3.21), $\tilde\zeta_0(t)$ will be given by a sum of the original modes ζ_n with n ≥ 0. By running the program that led to (3.13) and (3.14), but now in the new series, one can show that $\tilde\zeta_0(t) = 0$. The fact that $\tilde\zeta_0(t)$ vanishes implies that the vanishing of the classical part of the scalar solution, although established only to the first several orders in the original z-series, remains valid to all orders. More specifically, the finding that the higher modes ζ_n with n ≥ 3 are functions of ζ_1, ζ_2 must remain valid to all orders, and thus all of the higher modes ζ_n vanish.
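The κ-counting carried out in the next paragraphs can be summarized compactly. The following bookkeeping sketch is assembled from the scalings stated there (with ħ suppressed, as in the text):

$$\zeta \sim \kappa\,\xi, \qquad T_{\mu\nu} \supset \partial_\mu\zeta\,\partial_\nu\zeta \sim \kappa^2\,(\dot{\tilde\xi}^h_0)^2, \qquad \dot t\,\big|_{z = z^q_{EH}} \sim \mathcal{O}(\kappa^{-2}),$$

so that

$$\rho \;\supset\; \kappa^2\,(\dot{\tilde\xi}^h_0)^2\,\dot t^{\,2} \;\sim\; \frac{(\dot{\tilde\xi}^h_0)^2}{\kappa^2},$$

which is trans-Planckian for a generic time-dependent horizon mode and vanishes for a constant one.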
The fact that the matter part of the action comes at higher order in κ² translates into the form of the metric field equation, in which the matter part starts at κ² order. This implies that the solution generically takes the form ζ = κ ξ, where ξ represents the rescaled scalar field. Since the classical part identically vanishes, ξ(t, z) has a series containing quantum modes only. For what follows, it is convenient to introduce Z ≡ z − z_EH (3.24) and rewrite (3.6) as an expansion in Z. Let us consider the scalar kinetic term in the stress-energy tensor and its contribution to ρ, $\partial_\mu\zeta\,\partial_\nu\zeta\,U^\mu U^\nu$. At the classical level, $\dot t$ scales as $\dot t \sim \frac{1}{z - z_{EH}}$. The location of the horizon at the quantum level, $z^q_{EH}$, will take the form $z^q_{EH} = z_{EH} + \mathcal{O}(\kappa^2)$, and this implies $\dot t \sim \mathcal{O}(\kappa^{-2})$ (3.33) at $z = z^q_{EH}$. With this scaling, it is the $\dot\zeta\dot\zeta\,\dot t\dot t$ piece of ρ that gives the leading order energy. As $z \to z^q_{EH}$, one gets $\rho \sim \kappa^2\,(\dot{\tilde\xi}^h_0)^2\,\dot t^2 \sim (\dot{\tilde\xi}^h_0)^2/\kappa^2$. Note that it is the "horizon quantum mode" $\tilde\xi^h_0(t)$ that led to this trans-Planckian energy. What appears above is a time derivative of $\tilde\xi^h_0(t)$; a time-independent mode, $\tilde\xi^h_0(t) = \mathrm{const}$, will not lead to a trans-Planckian energy. The boundary modes are an important part of the physical degrees of freedom and must hold part of the system information. They determine the bulk dynamics, as analyzed in the previous subsections. More basically, they are the building blocks of the time-dependence and represent the boundary dynamics and deformations. The result above shows that, being part of the horizon data, they are also linked with the trans-Planckian energy. Conclusion In this sequel, we have further explored the intertwined relationships among boundary conditions, quantum effects, and time-dependent solutions. Three black hole backgrounds have been analyzed: a Schwarzschild-Melvin black hole, its generalization obtained in [37], and the generalization of the time-dependent AdS black hole considered in [38] and [29]. A pattern similar to that of [21] has again been found: the non-Dirichlet modes and quantum effects are crucial for a quantum-modified time-dependent black hole solution. One of our main focuses is the quantum-induced cosmological constant, and it is shown to be one of the agents that reinforce the time-dependence of the solution. The trans-Planckian energy is obtained in the case of the time-dependent solution. It is confirmed that the time-dependence of the solution is at odds with the Dirichlet boundary condition. The boundary conditions are closely tied with the quantization procedure. It is rather surprising that adoption of such an innocuous boundary condition as the Dirichlet one leads to a (presumably highly) limited subset of the proper Hilbert space. This phenomenon is in no way elementary: one does not have an analogous phenomenon in a system where the metric is kept non-dynamical. The limitation of the Dirichlet boundary condition has its origin in the fact that the physical degrees of freedom of a gravitational theory happen to be associated with the hypersurface at the boundary, and thus get suppressed by the Dirichlet boundary condition. We believe that the present work, together with the previous ones, unequivocally shows that the quantum gravitational effects cannot in general be disregarded, especially in time-dependent circumstances, since they can be important for nonperturbative physics. It has been shown that with the quantum-level constraints taken into account, the classical time-dependent black hole solution "reduces" to the AdS black hole solution in the sense explained in the main body.
Also, it is the quantum gravitational effects that lead to the trans-Planckian energy, as demonstrated in the main body. The phenomenon seen in (3.12) seems to have its origin in the subtlety of going to the classical limit [56]. In the present case, the subtlety is manifest as follows. As the ħ-order parts of the field equations must vanish separately from the classical parts, one gets ħ(···) = 0. (4.1) Inside the parenthesis, some of the classical modes come to appear. If one takes the ħ → 0 limit too early, the quantum-level constraint is removed, and this corresponds to the "usual" classical limit. As our analysis explicitly shows, the full quantum-level analysis can (and in our case does) introduce order-1 changes to the classical solution through the constraints coming from the part represented by the ellipsis. Let us clarify another conceptual issue, on matter- vs. graviton-loop effects. In the semiclassical limit only the matter fields are treated at the quantum level. This may seem to indicate that what is important for the trans-Planckian energy is the overall quantum effects, regardless of whether they come from matter or graviton fields, but not necessarily the quantum gravitational effects. This is not so. The loops of the matter fields introduce a cosmological constant term. One can then consider the back reaction of the metric to the quantum-induced cosmological constant through the existence of the time-dependent solution. So, strictly speaking, it is the quantum effects (regardless of whether they are matter- or graviton-originated) plus the metric back reaction that are important for the trans-Planckian energy. The fact that one considers the metric back reaction reflects that the metric is dynamical. Once one considers a dynamical metric and matter quantum effects, there is no rationale for excluding the graviton loop effects; hence the relevance of the quantum gravitational effects. Related to this, the following can be said. The AMPS argument in [17] is based largely on the semi-classical framework but nevertheless leads to the trans-Planckian energy. Their argument certainly contains the matter quantum field-theoretic ingredient. The metric is perceived as dynamical and plays a dynamical role in the AMPS argument. By the same logic as above, the quantum gravitational ingredient is involved. The following are questions that can be answered by further extending the line of our recent research. (Another, more serious issue in the perturbative analysis is the long-known gauge-choice dependence; it is a more fundamental question [57][58][59].) The trans-Planckian energy results in a manner similar to that of [21], where the scalar field scaling as ∼1/κ led to an energy of order 1/κ². Matter fields are present in both cases. It will be of some interest to explore the question of whether or not a matter field is required for a trans-Planckian energy in general, especially in the context of the distorted black hole solutions of [60] and [61]. We believe that the trans-Planckian energy will typically occur in time-dependent situations. Even the time-independent cases could actually translate into time-dependent cases in a more realistic framework in which one would consider an infalling wave packet in the second-quantized Schrodinger framework. In that approach, to which the present approach should be complementary, one would take |vac⟩ to be a certain type of wave packet propagating in the background under consideration. The expectation value of the stress-energy tensor would then be computed with respect to the "wave-packet vacuum."
The on-shell value of the Hamiltonian density will describe the spacetime-dependent energy density, and the energy density around the packet will be time-dependent. This way, the time-dependence is naturally built in, and we anticipate that the energy density will yield a trans-Planckian value around the packet as it approaches the horizon. The course of our recent research repeatedly points to the importance of the boundary dynamics in a gravitational theory. In the present work, it was the non-vanishing boundary mode ζ^h_0(t) (more precisely, the horizon mode $\tilde\zeta^h_0(t)$) that led to the trans-Planckian energy. More primarily, the incorporation of various boundary conditions is necessary for a correct identification of the whole Hilbert space of the theory [26,27]. The widely used Dirichlet boundary condition may well be of measure zero among all possible non-Dirichlet boundary conditions. We have analyzed the issue of Dirichlet vs. non-Dirichlet boundary conditions in detail in [29]. It did not appear possible to interpret the boundary condition of the quantum-level solution as a Neumann type; it might, however, be possible to interpret it as a Neumann type up to peculiarities of an AdS spacetime. It will be of some interest to make this more precise. Closely tied with the boundary condition is the question of the stress-energy tensor. The definition of the stress-energy tensor itself has a long history of debate, most of it concerning the definition at the classical level [45,46]. The quantization procedure poses additional subtleties; one of the most serious issues should again be the one associated with the boundary terms and conditions. A detailed analysis of the stress-energy tensor incorporating the works on the boundary terms, such as [62] and [63], should be performed. We will report on progress on some of these issues in the near future.
A Study of Background Conditions for SPHiNX, the Satellite-Borne Gamma-Ray Burst Polarimeter SPHiNX is a proposed satellite-borne gamma-ray burst polarimeter operating in the energy range 50-500 keV. The mission aims to probe the fundamental mechanism responsible for gamma-ray burst prompt emission through polarisation measurements. Optimising the signal-to-background ratio for SPHiNX is an important task during the design phase. The Geant4 Monte Carlo toolkit is used in this work. From the simulation, the total background outside the South Atlantic Anomaly (SAA) is about 323 counts/s, dominated by the cosmic X-ray background and albedo gamma rays, which contribute ~60% and ~35% of the total, respectively. The background from albedo neutrons and primary and secondary cosmic rays is negligible. The delayed background induced by the SAA-trapped protons is about 190 counts/s after SPHiNX has operated in orbit for one year. The resulting total background level of ~513 counts/s allows the polarisation of ~50 GRBs with minimum detectable polarisation less than 30% to be determined during the two-year mission lifetime. Introduction The Satellite Polarimeter for High eNergy X-rays (SPHiNX) is a proposed mission for a Swedish scientific satellite based on the InnoSat platform (InnoSat System Requirements Document from OHB Sweden, http://www.snsb.se/Global/Forskare/Utlysningar/InnoSat%20System%20Requirements%20Document_IS-OSE-RS-0001_2C.pdf), which supports a maximum payload mass of 25 kg and provides a payload power budget of 30 W. SPHiNX is a dedicated polarimeter for gamma-ray bursts (GRBs), the most luminous explosions in the Universe [1]. Long GRBs are generated by the collapse of massive stars [2], while short GRBs come from the merger of binary neutron stars or neutron star-black hole binary systems [3]. The recent detection of gravitational waves together with a short-duration GRB from a binary neutron star system [4] highlights the importance of GRB measurements. Even though thousands of GRBs have been detected by, for example, the CGRO, Swift and Fermi satellites (the Fermi GBM Burst Catalog, FERMIGBRST, which has been used in the performance simulation of SPHiNX, is available at https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html), there are still many open questions about GRBs. The main scientific goal of the SPHiNX mission is to probe the fundamental mechanism behind the GRB prompt emission by measuring the linear polarisation [5]. In contrast with energy spectrum and timing studies, there is a lack of reliable polarimetric data. Background plays an important role for satellite-borne high-energy telescopes, as they operate in the severe radiation environment above the atmosphere. Particles in the orbital environment interact with the detector, generating background indistinguishable from the signal data and resulting in a loss of sensitivity. The effect of background particles can be simulated with Monte Carlo software. Such simulations provide an efficient method to optimise the scientific performance of an instrument, for example by maximising the signal-to-background ratio (S/B) [6]. This paper describes the background study for SPHiNX implemented with the Geant4 [7] Monte Carlo simulation toolkit, and is organised as follows. Section 2 provides an overview of the SPHiNX mission and its properties. Section 3 presents the details of the simulations, and Section 4 presents results from the simulations. Section 5 is a brief summary of the work.
Instrument Design and Properties SPHiNX is optimised for GRB detection, meaning that a large field of view (FoV) is needed. Compton scattering is the dominant physics process in the SPHiNX sensitive energy range (50-500 keV). The photon interaction cross-section is described by the Klein-Nishina equation: $\frac{d\sigma}{d\Omega} = \frac{r_0^2}{2}\,\varepsilon^2\left(\varepsilon + \frac{1}{\varepsilon} - 2\sin^2\theta\cos^2\psi\right)$, (1) where $r_0$ is the classical electron radius, ε = E'/E, E is the initial photon energy, E' is the scattered photon energy, θ is the polar scattering angle between the incident photon and the scattered photon, and ψ is the azimuthal scattering angle of the scattered photon with respect to the polarisation of the incident photon. From Equation (1), it is clear that the azimuthal scattering angle ψ will be modulated by the polarisation of the incident photons; that is, photons are preferentially scattered at angles perpendicular to the incident polarisation vector. This mechanism provides a sinusoidal modulation curve when the azimuthal scattering angles recorded by the detector are reconstructed. The phase and amplitude of the modulation curve are determined by the polarisation angle and polarisation fraction of the incident beam, respectively. Unlike the GRB polarimeter POLAR [8], which makes measurements on board the Chinese space lab Tiangong-2, SPHiNX uses two kinds of detector material: one is the low-atomic-number plastic scintillator, which has a large cross-section for Compton scattering; the other is the high-atomic-number GAGG (gadolinium aluminium gallium garnet) scintillator [9], which provides a high probability of photoelectric absorption. In the baseline design, plastic scintillators are read out using photomultiplier tubes (PMTs), and GAGG scintillators are read out using multipixel photon counters (MPPCs). As shown in Figure 1, inside the cylindrical shielding, seven hexagonal plastic scintillators are indicated in grey, and each of them is split into six pieces with a gap size of 1 mm in between. The gap is implemented to accommodate wrapping materials that are needed to maximise the collection of scintillation light. Each side of the plastic scintillators is surrounded by four pieces of GAGG scintillator, shown in yellow, resulting in a total of 120 pieces of GAGG. The ideal events for polarisation measurement are generated by photons that undergo Compton scattering in a plastic scintillator with a subsequent photoelectric absorption in a GAGG scintillator. The interactions are required to occur within a coincidence window with a duration of a few hundred nanoseconds. In practice, all valid two-hit events can be used for the polarisation calculation, that is, plastic to plastic, GAGG to GAGG, plastic to GAGG and GAGG to plastic. One-hit events are foreseen to be used for spectroscopy and localisation of GRBs. Properties of SPHiNX are shown in Table 1. The field of view is defined by the incidence angle for which the effective area reduces to 50% of the value for an on-axis observation.
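To make the modulation-curve statement concrete, here is a minimal Monte Carlo sketch: azimuthal scattering angles are drawn from the ψ-dependence of Equation (1) at a fixed polar angle, histogrammed, and the modulation amplitude extracted from the cos 2ψ Fourier component. The photon energy and angle are illustrative numbers, not SPHiNX design values:

import numpy as np

rng = np.random.default_rng(0)

# fixed-theta slice of Eq. (1) for a fully polarised beam:
# weight(psi) proportional to eps + 1/eps - 2*sin^2(theta)*cos^2(psi)
E0 = 200.0 / 511.0                              # 200 keV photon, in units of m_e c^2
theta = np.radians(90.0)
eps = 1.0 / (1.0 + E0*(1.0 - np.cos(theta)))    # Compton relation E'/E

def weight(psi):
    return eps + 1.0/eps - 2.0*np.sin(theta)**2 * np.cos(psi)**2

# rejection-sample the azimuthal scattering angle psi
wmax = eps + 1.0/eps
psi = np.empty(0)
while psi.size < 100_000:
    cand = rng.uniform(0.0, 2.0*np.pi, 50_000)
    keep = rng.uniform(0.0, wmax, cand.size) < weight(cand)
    psi = np.concatenate([psi, cand[keep]])
psi = psi[:100_000]

# modulation curve C(psi) = A*[1 + mu*cos 2(psi - psi0)]; extract mu via Fourier coefficients
counts, edges = np.histogram(psi, bins=36)
centers = 0.5*(edges[1:] + edges[:-1])
a = np.mean(counts*np.cos(2.0*centers))
b = np.mean(counts*np.sin(2.0*centers))
mu = 2.0*np.hypot(a, b) / np.mean(counts)
print(f"modulation factor mu ~ {mu:.2f}")       # counts peak perpendicular to the polarisation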
Geant4 Simulation Geant4 [7] is a powerful toolkit for Monte Carlo simulations of particle interactions. A geometric mass model of the satellite, the input particle spectra, physics models describing particle interactions and the selections on energy deposits are needed in order to determine background rates. Geant4 version 10.02.p02 has been used in this work. Geometric Model A geometric model of the SPHiNX satellite is built in Geant4 based on the mission baseline design. All important components are implemented, including the solar panel, scintillator detectors, photosensors, carbon fibre-reinforced polymer (CFRP) front window, electronics, shielding, support structure, and the InnoSat platform, as shown in Figure 2. A higher level of detail is implemented for the sensitive detectors and the elements around them, for example the cylindrical shielding, which comprises three layers: 1 mm of lead (outermost layer), 0.5 mm of tin (intermediate layer) and 0.25 mm of copper (innermost layer). For components with a complicated structure that lie far from the sensitive detector, the primary concern is to represent the correct mass and the dominant materials of the modelled objects. For example, for the InnoSat platform, structural 7075 aluminium alloy and the lithium battery have been implemented. For clarity, the CFRP window covering the instrument aperture is not shown. Space Radiation Environment The space radiation environment strongly depends on the orbit of the satellite. SPHiNX is foreseen to be launched into a low Earth orbit (LEO) with an altitude of 550 km and an inclination of 53°. This is the lowest available inclination due to platform constraints. Outside the South Atlantic Anomaly (SAA), the prompt background contributions are mainly the cosmic X-ray background (CXB); primary and secondary cosmic rays (CRs), which are dominated by protons; and albedo gamma rays and neutrons, which are generated by the interaction of CRs with the upper atmosphere. All background components are assumed to be isotropic. Earth-shielding effects are applied to the CXB and cosmic-ray backgrounds. For albedo gamma rays and neutrons, the fluxes are normalised taking into account the average polarimeter pointing direction during a representative orbit. The energy spectra of these components are derived with the same methods used in the background simulation of HXMT (the Hard X-ray Modulation Telescope) [10], which operates in LEO at the same altitude as SPHiNX but at a lower inclination of 43°. The spectra of cosmic rays exhibit a strong dependence on geomagnetic latitude. Compared to HXMT, SPHiNX operates at a lower geomagnetic cut-off, which primarily affects the low-energy particle fluxes. The input spectra, with the ten components considered for SPHiNX, are shown in Figure 3. At the relatively high magnetic latitudes, the secondary electrons and positrons have the same energy spectrum [11]. SPHiNX will pass through the SAA, a region with an extremely high flux of trapped particles (mainly protons and electrons), several times per day. During a passage through the SAA, activation of structural components can cause a delayed background due to the decay of radioactive isotopes. The spectra of trapped particles in the SAA are obtained from SPENVIS (ESA's Space Environment Information System). Predictions from the AP-8 and AE-8 models for the orbital parameters of SPHiNX are shown in Figure 4. The galactic cosmic-ray flux at Earth is maximal during solar minimum. SPHiNX is expected to be launched in 2021, approaching the solar maximum; here, the conservative solar-minimum case has been considered, corresponding to the most extreme conditions that SPHiNX may encounter. While the flux of electrons is much higher than that of the protons, the energies of the electrons, which extend from 40 keV to 7 MeV as shown in Figure 4b, are too low to activate materials. Protons, occupying a higher energy range up to 400 MeV, are the main source of the delayed background, as shown in Figure 4a.
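Regarding the isotropy assumption above: a standard recipe for reproducing an isotropic flux in a Geant4-style simulation (a generic sketch, not necessarily the SPHiNX implementation) is to launch primaries from a source sphere enclosing the instrument, with positions uniform on the sphere and directions drawn from a cosine law about the inward normal:

import numpy as np

rng = np.random.default_rng(1)

def isotropic_primary(r_sphere=5.0):
    """Return (position, direction) for one primary; cosine-law launching from a
    source sphere reproduces an isotropic flux everywhere inside the sphere."""
    # uniform point on the sphere
    u = rng.uniform(-1.0, 1.0)
    v = rng.uniform(0.0, 2.0*np.pi)
    n = np.array([np.sqrt(1.0 - u*u)*np.cos(v), np.sqrt(1.0 - u*u)*np.sin(v), u])
    pos = r_sphere * n
    # cosine-law polar angle about the inward normal -n: p(alpha) ~ cos(alpha)*sin(alpha)
    alpha = np.arcsin(np.sqrt(rng.uniform()))
    beta = rng.uniform(0.0, 2.0*np.pi)
    # orthonormal frame around n, then rotate the inward normal by (alpha, beta)
    a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    direction = -n*np.cos(alpha) + (e1*np.cos(beta) + e2*np.sin(beta))*np.sin(alpha)
    return pos, direction

pos, d = isotropic_primary()
print(pos, d)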
Physics Model for Particle Interactions A reference physics list offered by the Geant4 collaboration, the Shielding Physics List, has been used in the simulation. This package includes all the physics processes needed for simulations in the space environment: electromagnetic physics, hadronic physics and radioactive decay physics. In order to correctly model the relatively low-energy interactions in the scintillators and treat polarisation properly, "G4EmLivermorePolarizedPhysics" has been used instead of the default "G4EmStandardPhysics" implementation. Data Analysis When tracking particles interacting with the sensitive detectors, all steps with an energy deposit are recorded by the Geant4 software, together with the corresponding detector ID. After the simulation of all the background components, the generated data are analysed. For SPHiNX, an event is considered a signal only if certain photosensor pulse-height conditions are fulfilled; applying the same conditions to the simulated background data yields the background event rate. These conditions are related to three parameters, the hit threshold (HT), the trigger threshold (TT) and the upper discriminator level (UD): 1. All hits must have an energy deposit exceeding the HT. 2. At least one hit must have an energy deposit above the TT. 3. No hit may have an energy deposit exceeding the UD. Here, a hit means an interaction in a scintillator, comprising the sum of all internal energy deposits. A hit is seen only when it has an energy deposit above the HT; otherwise it would be indistinguishable from the electronic noise level and would thus not issue a trigger in the instrument. The TT is needed to flag a valid event, and the UD is applied to suppress background, mainly from cosmic-ray minimum-ionising particles (MIPs). Based on the difference in light yield between the plastic scintillator and the GAGG scintillator, separate parameters are applied for them, as shown in Table 2. All these parameters are related to the detector and electronic readout system, and are chosen to optimise the polarimetric sensitivity in the SPHiNX energy range; a schematic implementation of the selection is sketched below.
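The sketch below applies the three pulse-height conditions to an event given as a list of (detector type, deposited energy) hits and returns its multiplicity class. The threshold numbers are placeholders standing in for the Table 2 values (the UD of 600 keV is the one quoted later in the text; the HT and TT values here are hypothetical):

# hypothetical per-material thresholds (keV); Table 2 holds the actual optimised values
HT = {"plastic": 5.0, "gagg": 5.0}     # hit threshold
TT = {"plastic": 20.0, "gagg": 20.0}   # trigger threshold
UD = 600.0                             # upper discriminator, as quoted in the text

def classify(hits):
    """hits: list of (detector_type, energy_keV). Return 'rejected' or a multiplicity class."""
    # condition 1: a hit is seen only above the hit threshold (below HT: electronic noise)
    visible = [(d, e) for d, e in hits if e > HT[d]]
    if not visible:
        return "rejected"
    # condition 2: at least one hit above the trigger threshold flags a valid event
    if not any(e > TT[d] for d, e in visible):
        return "rejected"
    # condition 3: no hit may exceed the upper discriminator (suppresses MIPs)
    if any(e > UD for _, e in visible):
        return "rejected"
    return {1: "one-hit", 2: "two-hit"}.get(len(visible), "higher-multiplicity")

# example: Compton scatter in plastic followed by photoabsorption in GAGG
print(classify([("plastic", 30.0), ("gagg", 170.0)]))   # -> two-hit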
Prompt Background The prompt background levels for all simulated background components outside the SAA, after applying the selections, are shown in Table 3. Here, the background count rate from each component has been separated into three categories: one-hit events (energy deposit in only one scintillator), two-hit events (energy deposits in two separate scintillators) and higher-multiplicity events (interactions in three or more scintillators). Two-hit events are used to determine the polarisation, and one-hit events are foreseen to be used for spectroscopy and localisation of GRBs. Higher-multiplicity events may not be recorded due to onboard storage constraints. From Table 3, one-hit events dominate the total background, at more than five times the rate of two-hit events. The situation for GRBs is similar, with one-hit events dominating by a fraction that depends on the GRB energy spectrum. Only a fraction of one-hit events will be stored, because of limitations on onboard storage, dead time and downlink. Higher-multiplicity events contribute little to the total rate and are ignored in what follows. The total prompt background of two-hit events is 323 counts/s, as shown in Table 3. The dominant component is the CXB, with a contribution of 195 counts/s, which is ~60% of the total background; this is common for large-FoV satellite-borne detectors. The second most prominent component is albedo gamma rays, coming from cosmic rays interacting with the upper atmosphere. The flux of albedo gamma rays depends on the amount of atmosphere that the detector views [6]. Here, the average background from albedo gamma rays has been used, which amounts to about 35% of the total background. The remaining 5% comes from albedo neutrons and primary and secondary cosmic rays, which is negligible due to the shielding and thresholds applied, as discussed in Section 4.3. Delayed Background The delayed background is generated by the decay of radioactive isotopes produced by trapped protons in the energy range from 100 MeV to 400 MeV [10]. Such decays can be recognised from the energy-deposit times recorded by Geant4. While the timescale for the prompt background is <1 µs, the delayed background has a much wider time distribution, extending to hundreds of days depending on the half-life of the radioactive isotope. The simulation shows that the delayed background increases rapidly during the first month and slowly saturates after one year of operation in orbit, at ~190 counts/s for two-hit events, as shown in Figure 5. The majority of the delayed background originates from the aluminium materials in the platform structure. The effect of this background may be reduced during the final optimisation study of the payload integration. Upper Discriminator Selection The prompt and delayed background levels presented here are derived using a UD of 600 keV, chosen as a trade-off between source and background rates. The UD setting affects particle backgrounds (protons, electrons and neutrons) more strongly than the photon background. For example, the two-hit event rate from the CXB increases by only 0.7% if no UD is applied at all, while the primary proton rate increases by a factor of 22.5 in the absence of a UD selection, resulting in a two-hit event rate of ~106 counts/s, comparable to the contribution from the second most dominant background source, albedo gamma rays. The UD selection has only a limited influence on photons: for a GRB with Band spectral parameters [12], E_peak = 200 keV, α = −1.0, β = −2.5, the UD selection affects at most 0.2% of the two-hit event rate, which is negligible compared to its effect on the particle background. Thus, the UD selection is very important for the data quality in terms of the signal-to-background ratio. This is of high importance for SPHiNX, where the downlink capacity from the payload will be limited. Hit Threshold The behaviour of the HT is opposite to that of the UD. When the hit threshold is increased, the CXB rate decreases dramatically, while the primary proton rate remains essentially unaffected, as shown in Figure 6, where the same hit threshold is applied for both plastic and GAGG. This is under the assumption that the dynamic range of the GAGG detector and readout can be extended down to 5 keV. Since the total background is dominated by the CXB, a high hit threshold is favoured from the background standpoint; however, a large fraction of the signal events from GRBs generate hits in this low-energy region, meaning that a low hit threshold is required for the scintillators. The optimisation of these event-selection parameters is done in terms of the minimum detectable polarisation (MDP) [13], based on the Fermi/GBM catalogue of GRBs [14].
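For reference, the figure of merit from [13] is commonly written, at the 99% confidence level, as

$$\mathrm{MDP}_{99\%} = \frac{4.29}{\mu_{100}\,R_S}\,\sqrt{\frac{R_S + R_B}{T_{\rm obs}}},$$

where $\mu_{100}$ is the modulation factor for a 100% polarised source, $R_S$ and $R_B$ are the source and background count rates, and $T_{\rm obs}$ is the observation time; we quote the standard form here, as the text only cites [13]. The expression makes the trade-offs above explicit: the background enters only through $\sqrt{R_S + R_B}$, while cuts that sacrifice source counts penalise the $1/R_S$ prefactor directly.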
Summary The Geant4 toolkit has been used in the background study of SPHiNX, by constructing an instrument model and a space radiation model and by applying particle interaction physics models. Data selections are based on the parameters listed in Table 2, which are optimised in terms of the MDP based on an independent simulation of GRBs from the Fermi/GBM catalogue. The simulation shows that the total background of two-hit events, including prompt and delayed background, is 513 counts/s after SPHiNX has operated in orbit for one year. This is expected to be the most extreme case that SPHiNX will encounter. The prompt two-hit event rate from all considered background components outside the SAA amounts to ∼323 counts/s, as shown in Table 3. The dominant component is the CXB, contributing 60% of the total background. The second most prominent component, albedo gamma rays, whose flux varies with the pointing of the instrument, has an average contribution of 35%. Background from albedo neutrons and primary and secondary cosmic rays has a total contribution of 5%, which is negligible due to the shielding and thresholds applied. A delayed background originating from radioactive isotope decay induced by SAA-trapped protons significantly increases the two-hit background. It increases rapidly during the first month and slowly saturates at ∼190 counts/s after SPHiNX has been in orbit for one year, as shown in Figure 5. Since the delayed background mainly originates from the aluminium platform structure, a reduction is expected to be possible once the polarimeter shielding is optimised. An independent simulation of GRBs from the Fermi/GBM catalogue, using the same thresholds as applied for the background study, shows that the polarisation of ∼50 GRBs will be measured with a minimum detectable polarisation of less than 30% during the two-year mission lifetime. This performance is sufficient to allow discrimination between GRB prompt emission arising from synchrotron processes in ordered and random magnetic fields, and inverse-Compton-dominated outflows [14]. Author Contributions: F.X. presented the paper at the Workshop on behalf of the SPHiNX Collaboration. The manuscript was prepared by F.X. and M.P.
4,239.2
2018-04-24T00:00:00.000
[ "Physics", "Environmental Science" ]
Photochemical escape of atomic C and N on Mars during the X8.2 solar flare on 10 September 2017 Context. Characterizing the response of the upper Martian atmosphere to solar flares could provide important clues as to the climate evolution of the red planet in the early Solar System, when the extreme ultraviolet and soft X-ray radiation was substantially higher than the present-day level and when these events occurred more frequently. A critical process herein is Martian atmospheric escape in the form of atomic C and N, driven mainly by CO2/CO and N2 dissociation. Aims. This study is devoted to evaluating how these escape rates varied on the dayside of Mars during the X8.2 solar flare on 10 September 2017. Methods. The background Martian atmospheric structures before, during, and after the flare are constructed from the Neutral Gas and Ion Mass Spectrometer measurements made on board the Mars Atmosphere and Volatile Evolution spacecraft, from which the hot C and N production rate profiles via different photon and photoelectron impact channels and at different flare stages are obtained. They are combined with the respective escape probability profiles, computed using a test particle Monte Carlo approach, to derive the atomic C and N escape rates on the dayside of Mars. Results. Our calculations indicate that the pre-flare C and N escape rates are (1.3–1.4) × 10^24 s^-1 over the dayside of Mars. During the event, we find a modest decrease in the C escape rate of 8% about 1 h after the flare peak, followed by a recovery to the pre-flare level several hours later. However, an opposite trend is found for the N escape rate during the same period, which shows an increase of 20% followed by a recovery to the pre-flare level. Conclusions. The distinction between C and N in terms of the variation in the escape rate during the solar flare reflects the competition between two flare-induced effects: enhanced hot atom production via dissociation and enhanced collisional hindrance due to atmospheric expansion. Introduction Many aspects of the upper Martian atmosphere are strongly influenced by solar extreme ultraviolet (EUV) and soft X-ray (SXR) radiation. Firstly, the atmospheric thermal structure is known to vary systematically with the incident solar irradiance, in terms of both the absolute temperature (Forbes et al. 2008; Jain et al. 2015) and the heating efficiency (Gu et al. 2020b). Secondly, the ionized portion of the upper atmosphere, normally referred to as the ionosphere, is predominantly solar-driven, manifesting as an enhanced peak electron or ion density at high solar activities (Morgan et al. 2008; Huang et al. 2020). Thirdly, the atmospheric neutral escape rate also responds quite sensitively to solar radiation, owing to enhanced dissociative recombination (DR) of O2+ (for O escape) or enhanced CO and N2 dissociation (for C and N escape) at high solar activities (Lillis et al. 2017; Cui et al. 2019).
Along with the 11-yr solar cycle and the 25-day solar rotation period, the solar EUV and SXR irradiance can be elevated by a large factor over the short duration of solar flares, from tens of minutes to hours. The response of the upper Martian atmosphere to these events is particularly interesting because it may provide important clues as to the climate evolution of Mars in the early Solar System, when the EUV and SXR radiation was much higher than the present-day level and when flare events occurred more frequently. Because of this, many efforts have been devoted to various aspects of such a response, either numerically or observationally. The response of the upper Martian atmosphere to solar flares was investigated by Thiemann et al. (2015) and Elrod et al. (2018), who revealed significant heating during flares and a concomitant density enhancement driven by thermal expansion. Enhanced CO2+ ultraviolet doublet and CO Cameron band emission has also been observed during flares, especially at relatively low altitudes, where SXR photons deposit most of their energy (Jain et al. 2018). Meanwhile, the response also manifests as variations in the relative O and CO2 abundances, likely owing to flare-induced variations in CO2 dissociation and concomitant O production (Thiemann et al. 2018; Cramer et al. 2020). Finally, the Martian ionospheric structure is known to vary substantially during flares, manifesting as an enhancement in both the photoelectron intensity and the cold plasma content (Mendillo et al. 2006; Xu et al. 2018). Solar-driven atmospheric escape has long been known to modulate the long-term evolution of the Martian climate. Mayyasi et al. (2018) show that the H thermal escape rate increased by a factor of 5 during the X8.2 solar flare on 10 September 2017, mainly through an increase in the upper atmospheric temperature. For O photochemical escape via O2+ DR, Lee et al. (2018) estimate that the O escape rate increased modestly by 20% during the same flare, in response to the increase in solar EUV. The much larger change in solar SXR is less important, as the SXR photons affect the O2+ content below the ionospheric peak, where hot O atoms are less likely to escape due to frequent collisions with the ambient neutrals. Such a conclusion was later confirmed by Thiemann et al. (2018). Photochemical processes also contribute to substantial C and N escape on Mars via CO and N2 photolysis (Fox 1993; Fox & Bakalian 2001; Bakalian & Hartle 2006; Bakalian 2006; Cui et al. 2019). The more recent investigation by Lo et al. (2021), however, proposed CO2 photolysis as a more important contributor to C escape, based on the new cross section data of Lu et al. (2014). All the above processes clearly depend on the incident solar EUV and SXR irradiance, thus indicating a potential response of atomic C and N escape on Mars to solar flares. This study is devoted to the first evaluation of such a response. We describe our approach in Sect. 2 and the main results in Sect. 3. We then present a discussion and concluding remarks in Sect. 4. Model description We focus on the well-studied X8.2 solar flare event that occurred on 10 September 2017, during which several aspects of the Martian upper atmospheric response have been thoroughly investigated, including the thermal structure, neutral and ionic composition, airglow emission, and atomic H and O escape (Elrod et al. 2018; Mayyasi et al. 2018; Thiemann et al. 2018; Jain et al. 2018; Lee et al. 2018; Cramer et al. 2020), with the aid of a multi-instrument data set on board the Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft (Jakosky et al. 2015).
This event has also been modeled in detail by Fang et al. (2019), revealing both ionospheric perturbations occurring mainly below 110 km, driven by enhanced SXR radiation, and neutral atmospheric perturbations occurring above 150 km, driven by enhanced EUV radiation. The solar flare peaked at 16:24 UT. We display in Fig. 1 (top row), from left to right, the pre-, during-, and post-flare incident solar spectra at the top of the Martian atmosphere over the wavelength range 0.5-189.5 nm, as adapted from Thiemann et al. (2018). In particular, the pre-flare spectrum reflects the average state over a time span of 3-12 h before the flare peak (including MAVEN orbits #5915, #5916, and #5917), the post-flare spectrum reflects the average state over a time span of 5-10 h after the flare peak (including MAVEN orbits #5919 and #5920), and the during-flare spectrum is from orbit #5918, with periapsis occurring about 1 h after the flare peak. Compared to the pre-flare level, the during-flare solar flux is enhanced by a factor of 4.6 in SXR (integrated over 1-10 nm) and by a smaller factor of 1.9 in EUV (integrated over 10-100 nm), whereas the post-flare solar flux is comparable to the pre-flare flux over most of the displayed wavelength range. The change in solar flux at longer wavelengths is negligible, and the solar zenith angle during these orbits remains nearly constant at 70°. In our calculations, the background atmospheric structures, from pre- to post-flare, are adapted from the Neutral Gas and Ion Mass Spectrometer (NGIMS) measurements made during the same MAVEN orbits (Mahaffy et al. 2015). We considered CO2, O, N2, and CO for the purposes of this study. Here we used the NGIMS O densities directly, rather than manually adjusting them by imposing a multiplicative factor of ∼1.5 as proposed by Fox et al. (2021). Also, as a standard procedure, outbound measurements were excluded to avoid possible contamination by physical adsorption/desorption or heterogeneous chemistry on the instrument antechamber walls. The full density profiles used in our calculations cover the altitude range 100-500 km and were constructed with the procedure described in Wu et al. (2020). Specifically, the NGIMS CO2 and N2 densities were used to establish the temperature profile and the eddy diffusion coefficient profile based on the empirical formalism of Krasnopolsky (2002), which allows the density profiles of all species to be readily derived with the aid of the hydrostatic or diffusion equation. All density profiles adopted in our background atmosphere are displayed in Fig. 1 up to 400 km (middle row). Detailed comparisons between the pre- and during-flare conditions indicate that the exobase temperature increases significantly, from 170 K to 220 K, as a result of flare-induced heating, and that the atmospheric densities also increase substantially as a result of flare-induced thermal expansion, by a factor of 2.8 for CO2, 1.9 for O, 2.2 for N2, and 1.9 for CO, all referred to a common altitude of 200 km. In addition, a clear change in the atmospheric composition is witnessed, in the form of an O/CO2 ratio increasing from 4.9 (pre-flare) to 9.4 (during-flare) as a result of flare-induced dissociation; these values refer to a constant total density level of 10^7 cm^-3. These features are fully compatible with the established scenario of the Martian upper atmospheric response to solar flares (e.g., Elrod et al. 2018; Thiemann et al. 2018; Cramer et al. 2020).
Lo et al. (2021) report a contribution of 15% from photoelectron impact dissociation to hot C production in the upper Martian atmosphere, motivating us to include similar processes in our calculations. For this purpose, information on the photoelectron energy spectrum was required, which is provided by the MAVEN Solar Wind Electron Analyzer (SWEA) measurements (Mitchell et al. 2016). However, due to the incomplete coverage of the available SWEA data (near the flare peak), we constructed a two-stream, quasi-steady-state kinetic model to derive the photoelectron energy spectra from 3 eV to 5 keV for all three stages. We simultaneously solved for the upward and downward photoelectron fluxes as a function of energy and altitude, ignoring the effects of the magnetic field for simplicity. A large number of electron-neutral collision channels, both elastic and inelastic, were included to accurately model the electron energy degradation process. Coulomb collisions, which only affect the low-energy portion of the model photoelectron spectra and are irrelevant for impact dissociation, were ignored. In our calculations, we assumed local energy degradation at the lower boundary, whereas we assumed a zero downward flux and a constant upward flux gradient at the upper boundary. The model results indicate, as expected, comparable pre- and post-flare photoelectron spectral intensities but considerably enhanced during-flare photoelectron intensities at all altitudes and all energies. Examples are provided in Fig. 1 (bottom row), which compares the model photoelectron energy spectra in both the upward and downward directions at two representative altitudes, 150 km and 200 km, for all three stages. The enhancement in photoelectron flux is clearly visible and is more evident at the high-energy end. We were thus able to compute the hot C and N production rate profiles via both photon and photoelectron impact processes. To derive the escape rates, these production rate profiles should be combined with the respective escape probability profiles, which are numerically obtained by implementing a test-particle Monte Carlo model. For each hot atom, the escape probability is a function of the release altitude, as well as the magnitude and direction of the initial velocity. Our model is analogous to existing models of photochemical escape on Mars (e.g., Bakalian & Hartle 2006; Bakalian 2006; Fox & Hać 2009, 2014, 2018) and has been extensively used in our previous investigations of nonthermal escape on various Solar System objects (e.g., Gu et al. 2020a, 2021).
Specifically, a spherically symmetric atmosphere was adopted in all our calculations. The collisions between hot atoms and background neutrals were assumed to vary with the relative energy of the two colliding partners, with the relevant cross sections taken from Fox & Hać (2018) and the scattering angle distribution taken from Gacesa et al. (2020). All inelastic processes, such as collisional excitation, were ignored in our calculations. For each model run, a total of 100 000 particles were released at any given altitude, with the distribution of the initial velocity direction assumed to be isotropic and the distribution of the initial energy obtained from the respective differential dissociation rate. In our calculations, dissociation cross sections for CO2, CO, and N2 were adapted from Heays et al. (2017) and Gacesa et al. (2020) for photon impact processes and from Cui et al. (2011) and Lo et al. (2021) for photoelectron impact processes. Fig. 2. Hot C (blue) and N (red) production rate profiles in the upper Martian atmosphere for the pre-, during-, and post-flare stages, from left to right. The situations for photon and photoelectron impact processes are presented separately in the top and bottom rows. For hot C, we show the results for both CO2 (solid) and CO (dashed) dissociation. Model results The hot atom production rate profiles are depicted in Fig. 2 for the pre-, during-, and post-flare stages, from left to right. The situations for photon and photoelectron impact processes are shown separately in the top and bottom rows, respectively. In the top row, different production channels are shown: CO and CO2 photolysis, the two most important hot C production channels, and N2 photolysis, the dominant hot N production channel. The roles of these channels have been well identified in previous studies (e.g., Fox 1993; Fox & Bakalian 2001; Bakalian & Hartle 2006; Bakalian 2006; Cui et al. 2019; Lo et al. 2021). In particular, CO2 photolysis always produces more hot C atoms than CO photolysis near and below the exobase, whereas CO becomes more important at higher altitudes, in agreement with the recent finding of Lo et al. (2021). Over the displayed altitude range, the effects of photoelectron impact processes are found to be non-negligible, contributing 30% of hot C production and 25% of hot N production for the pre- and post-flare stages, and 40% of hot C production and 35% of hot N production for the during-flare stage. The results for C agree roughly with those of Lo et al. (2021). For CO dissociation, we separately evaluated two photolytic channels that produce ground-state and excited-state C atoms, but did not distinguish between them for photoelectron impact dissociation due to the lack of relevant cross section data. It is also noteworthy that photoelectron impact dissociation tends to be more effective at low altitudes, in response to a more significant enhancement in solar SXR than in solar EUV.
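As a rough illustration of how such production rate profiles arise, the sketch below evaluates a single-species, single-wavelength photolysis production profile P(z) = n(z) σ_d F exp(−τ(z)/cos SZA); the density profile, the cross sections and the solar flux are toy values, not the NGIMS-based inputs used in the actual calculations.

```python
import numpy as np

# Minimal sketch of a photolysis production-rate profile P(z) = n(z) * J(z),
# with J(z) = sigma_dis * F_inf * exp(-tau(z)/cos(SZA)). All numbers below
# are illustrative only; the 70 deg solar zenith angle matches the event.

z = np.linspace(100e5, 400e5, 301)          # altitude grid [cm]
n_co2 = 1e11 * np.exp(-(z - 100e5) / 10e5)  # toy CO2 density [cm^-3], H = 10 km
sigma_abs = 6e-17                           # absorption cross section [cm^2]
sigma_dis = 1e-17                           # dissociation cross section [cm^2]
f_inf = 1e10                                # top-of-atmosphere flux [ph cm^-2 s^-1]
sza = np.deg2rad(70.0)

# Overhead column density N(z), integrating the density downward from the top.
dz = z[1] - z[0]
col = np.cumsum(n_co2[::-1])[::-1] * dz     # [cm^-2]
tau = sigma_abs * col / np.cos(sza)         # plane-parallel optical depth

production = n_co2 * sigma_dis * f_inf * np.exp(-tau)  # [cm^-3 s^-1]
print(f"peak production {production.max():.2f} cm^-3 s^-1 "
      f"at {z[production.argmax()] / 1e5:.0f} km")
```

The profile peaks near the altitude where the optical depth along the slant path approaches unity, which is why a flare-driven SXR enhancement mainly boosts production at low altitudes, as noted above.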
Interesting features can be seen by comparing the different flare stages. Referring to a fixed altitude of 200 km and combining all channels, the hot C production rate increases from 0.7 cm^-3 s^-1 (pre-flare) to 2.5 cm^-3 s^-1 (during-flare) and then decreases back to 0.7 cm^-3 s^-1 (post-flare), where we have included the contributions from both CO and CO2 dissociation. Similarly, the hot N production rate increases from 0.7 cm^-3 s^-1 (pre-flare) to 2.0 cm^-3 s^-1 (during-flare) and then decreases back to 0.6 cm^-3 s^-1 (post-flare). Figure 2 also clearly reveals that the flare-induced enhanced production of hot C and N becomes more prominent with increasing altitude. To be more quantitative, the flare-induced variation in hot N production above 200 km is characterized by a scale height change from 15 km (pre-flare) to 26 km (during-flare) and then back to 16 km (post-flare), whereas for hot C production, the same scale height changes from 13 km (pre-flare) to 20 km (during-flare) and then back to 14 km (post-flare). These features are clearly driven by the flare-induced enhancement in both dissociation and heating, along with the concomitant thermal expansion of the whole upper atmosphere (e.g., Thiemann et al. 2015; Elrod et al. 2018; Cramer et al. 2020). In Fig. 3, we show the escape probability profiles for various photochemical channels. Fig. 3. C (blue) and N (red) escape probability profiles in the upper Martian atmosphere for the pre-, during-, and post-flare stages, from left to right. The situations for photon and photoelectron impact processes are presented in the top and bottom rows, respectively. In the top row, we show the results for CO photolysis producing both ground (dashed) and excited (dash-dotted) state atoms, whereas in the bottom row, we do not distinguish between the two channels due to the lack of relevant cross section data. For each case, the escape probability rapidly increases with increasing altitude over a narrow altitude range more or less centered around 200 km, from essentially zero to a constant level at sufficiently high altitudes. Such a trend is in agreement with numerous previous results (e.g., Lee et al. 2018). Of particular interest is the high-altitude asymptotic probability, which is greater than 50% under certain circumstances as a result of the backward scattering of escaping atoms (Fox & Hać 2009, 2014), a feature that is not predicted by the idealized exobase approximation and can only be properly modeled with the Monte Carlo approach. The escape probability profiles for several channels (CO photolysis producing excited-state atoms and CO2 photolysis) are exceptionally low compared to the other two. This is because a significant portion of the hot C atoms released from these channels have nascent energies below the local escape energy, owing to the large dissociation thresholds of these channels. When compared to both the pre- and post-flare stages, the escape probability profiles near the flare peak appear to be shifted upward by several kilometers, implying that hot atoms tend to be more seriously hindered by collisions with ambient neutrals as a consequence of the thermal expansion of the whole upper atmosphere.
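The shape of these escape probability curves can be reproduced qualitatively with a strongly simplified test-particle Monte Carlo sketch: a single background species, a constant hard-sphere cross section, isotropic re-scattering, and a crude fractional energy loss per collision. All parameter values below are illustrative and not those of the actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

H = 12e5        # background scale height [cm]
N0 = 1e10       # background density at the grid origin [cm^-3]
SIGMA = 2e-15   # hard-sphere collision cross section [cm^2]
E_ESC = 1.5     # escape energy [eV], order of magnitude for C at Mars
E0 = 3.0        # nascent hot-atom energy [eV], single channel assumed
F_KEEP = 0.6    # mean fraction of energy retained per collision (crude)

def escapes(z0, z_top=500e5):
    """Follow one test particle released at altitude z0 [cm above origin]."""
    z, e = z0, E0
    mu = rng.uniform(-1.0, 1.0)              # cosine of an isotropic direction
    while True:
        n_local = N0 * np.exp(-z / H)
        step = rng.exponential(1.0 / (n_local * SIGMA))  # local free path
        z += mu * step
        if z > z_top:                        # left the collisional region
            return e > E_ESC
        if z < 0.0:                          # lost to the dense lower atmosphere
            return False
        e *= F_KEEP                          # collide: degrade energy
        if e < E_ESC:                        # thermalized, cannot escape anymore
            return False
        mu = rng.uniform(-1.0, 1.0)          # isotropic re-scatter

def escape_probability(z0, n=2000):
    return sum(escapes(z0) for _ in range(n)) / n

for z0_km in (0, 50, 100, 150):
    print(f"release at +{z0_km:3d} km: P_esc = {escape_probability(z0_km * 1e5):.2f}")
```

Even this toy version rises from essentially zero to a roughly constant asymptote over a narrow altitude band; reproducing the above-50% asymptote requires the anisotropic (forward-backward) scattering treatment of the full model, which is omitted here.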
Combining the numerical results for the hot atom production rate and the escape probability (denoted P_j and ϵ_j for channel j), we were able to compute the respective escape rate (denoted L_j) via L_j = 2π ∫ (R_M + z)^2 P_j(z) ϵ_j(z) dz, where R_M is the solid body radius of Mars and z is the altitude. The computed escape rates over the dayside of Mars are listed in Table 1 for reference. Under all circumstances, the contribution from photoelectron impact dissociation to escape is much smaller than the respective contribution to hot atom production (25-40%), because hot atoms released by this process are preferentially located at low altitudes (see above) and are hence less likely to escape due to more frequent collisions with ambient neutrals. This accounts for the fact that the total C and N escape rates are mostly driven by direct photolysis. For C escape, the contribution from CO2 photolysis is significantly reduced near the flare peak, whereas the contribution from CO photolysis is instead enhanced (see below). Because the former is the more important channel driving C escape, we reach the conclusion that the effect of the solar flare is to reduce the total C escape rate by 8%. For N escape, however, an opposite trend is predicted by our calculations, with the total N escape rate enhanced by nearly 20% near the flare peak. As expected, the escape rates of both species fall back to their initial levels during the post-flare stage, when the solar EUV and SXR irradiance recovers to the pre-flare level. Obviously, the response of Martian photochemical escape to solar flares is the combined result of two competing effects. On the one hand, enhanced solar EUV and SXR irradiance causes enhanced photon and photoelectron impact dissociation, which naturally produces more candidate escaping atoms. On the other hand, enhanced solar irradiance also significantly heats the upper atmosphere and causes it to expand, implying more frequent collisions with ambient neutrals that hinder escape. For hot C, the latter effect is more important than the former, leading to an overall weaker flare-induced C escape. For hot N, it appears that the latter is less important, that is to say, the effect of enhanced collisional hindrance is insufficient to offset the effect of enhanced hot N production. Table 1. Dayside atomic C and N escape rates on Mars in units of s^-1 for the pre-, during-, and post-flare stages and for various photochemical channels. Notes: hν denotes a photon, and e denotes a photoelectron.
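A numerical sketch of the escape-rate integral above, with toy stand-ins for P_j(z) and ϵ_j(z):

```python
import numpy as np

# Numerical sketch of the dayside escape-rate integral given above:
# L_j = 2*pi * Integral of (R_M + z)^2 * P_j(z) * eps_j(z) dz.
# The production and escape-probability profiles are illustrative stand-ins.

R_M = 3389.5e5                          # Mars solid body radius [cm]
z = np.linspace(100e5, 400e5, 301)      # altitude grid [cm]

P = np.where(z < 200e5, 0.7,            # toy production rate [cm^-3 s^-1]:
             0.7 * np.exp(-(z - 200e5) / 15e5))  # flat below 200 km, decaying above
eps = 0.5 / (1.0 + np.exp(-(z - 200e5) / 10e5))  # sigmoid escape probability

L = 2.0 * np.pi * np.trapz((R_M + z) ** 2 * P * eps, z)
print(f"L ~ {L:.2e} atoms/s")  # comes out at the 1e23-1e24 scale quoted above
```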
Discussion and concluding remarks In this study we examined the variation in atomic C and N escape on Mars during the X8.2 solar flare on 10 September 2017. A modest decrease in the C escape rate of 8% is found about 1 h after the flare peak, followed by a recovery to the pre-flare level several hours later. An opposite trend is found for the N escape rate, which shows a fairly significant increase of 20% followed by a recovery to the pre-flare level. This distinction reflects the competition between two flare-induced effects: enhanced hot atom production via photolysis (along with concomitant photoelectron impact dissociation) and enhanced collisional hindrance due to atmospheric expansion. The results reported here can be favorably compared to several existing investigations, which we discuss in turn below. Firstly, during the entire course of the flare event, our derived C escape rate remained at the level of 10^24 s^-1, in good agreement with the recent result of Lo et al. (2021) and within the range reported earlier by Gröller et al. (2014), despite the fact that the latter included neither CO2 photolysis nor photoelectron impact dissociation in their calculations. The atomic N escape rate derived here is also at the level of 10^24 s^-1, consistent with the high-solar-activity value of Bakalian & Hartle (2006). Secondly, variations in the C and N escape rates with solar irradiance were recently investigated by Cui et al. (2019), which we find to be fully consistent with the flare-induced variations reported here. Both studies suggest that the N escape rate increases when the solar irradiance is elevated. A comparison of the C escape rates deserves some caution. At face value, Cui et al. (2019) reported a positive correlation of C escape with solar irradiance, whereas in this study an opposite correlation is inferred. However, this does not necessarily indicate any conflict, because Cui et al. (2019) considered CO dissociation only, whereas our calculations include both CO2 and CO dissociation. In fact, Table 1 does suggest that the portion of the C escape rate driven by CO2 dissociation is reduced during the flare, whereas the portion driven by CO dissociation is enhanced instead. We also note that photoelectron impact dissociation, which contributes a non-negligible fraction of the total C and N escape, is not included in the calculations of Cui et al. (2019). Thirdly, a comparison with atomic O escape provides some interesting insights into the response of the upper Martian atmosphere to solar flares. Such a response has been modeled in detail by Lee et al. (2018), who reported an enhancement of 20% in the O escape rate right at the flare peak, followed by a rapid reduction of 13% about 2.5 h later. It thus appears that O escape responds quite instantaneously during the flare event, which can be understood in terms of a nearly instantaneous ionospheric response to the change in solar irradiance. Clearly, the flare-induced variations in atomic C and N escape should be different because they are driven by the delayed neutral atmospheric response. Due to the scarcity of available data, numerical calculations are required to distinguish between the flare-induced variations of different escape processes, which we defer to a follow-up investigation.
Fig. 1. Top: From left to right, the pre-, during-, and post-flare solar EUV and SXR spectra adapted from Thiemann et al. (2018) over the wavelength range 0.5-189.5 nm. The pre-flare spectrum (gray) is superimposed on the during-flare spectrum (red) for comparison. Middle: From left to right, the pre-, during-, and post-flare Martian upper atmospheric structure in terms of the CO2 (black), O (red), CO (orange), and N2 (green) density profiles, all constructed with the aid of the MAVEN NGIMS measurements. Bottom: From left to right, the pre-, during-, and post-flare photoelectron energy spectra modeled with a two-stream kinetic approach at 150 km (blue) and 200 km (red), respectively, in both the upward (solid) and downward (dashed) directions. Note that the photoelectron spectra in the two directions are nearly indistinguishable at 150 km.
5,941.6
2022-12-20T00:00:00.000
[ "Physics", "Environmental Science" ]
The Role of Artificial Intelligence in Brain Tumor Diagnosis: An Evaluation of a Machine Learning Model This research study explores the effectiveness of a machine learning image classification model in the accurate identification of various types of brain tumors. The types of tumors under consideration in this study are gliomas, meningiomas, and pituitary tumors. These are some of the most common types of brain tumors and pose significant challenges in terms of accurate diagnosis and treatment. The machine learning model that is the focus of this study is built on the Google Teachable Machine platform (Alphabet Inc., Mountain View, CA). Google Teachable Machine is a machine learning image classification platform built on TensorFlow, a popular open-source platform for machine learning. The Google Teachable Machine model was specifically evaluated for its ability to differentiate between normal brains and the aforementioned types of tumors in MRI images. MRI is a common tool in the diagnosis of brain tumors, but the challenge lies in the accurate classification of the tumors, which is where the machine learning model comes into play. The model is trained to recognize patterns in the MRI images that correspond to the different types of tumors. The performance of the machine learning model was assessed using several metrics, including precision, recall, and F1 score, generated from a confusion matrix analysis and performance graphs. A confusion matrix is a table that is often used to describe the performance of a classification model. Precision is a measure of the model's ability to correctly identify positive instances among all instances it identified as positive. Recall, on the other hand, measures the model's ability to correctly identify positive instances among all actual positive instances. The F1 score combines precision and recall, providing a single metric for model performance. The results of the study were promising: the Google Teachable Machine model demonstrated high performance, with accuracy, precision, recall, and F1 scores ranging between 0.84 and 1.00. This suggests that the model is highly effective in accurately classifying the different types of brain tumors. This study provides insights into the potential of machine learning models in the accurate classification of brain tumors. The findings lay the groundwork for further research in this area and have implications for the diagnosis and treatment of brain tumors. The study also highlights the potential of machine learning to enhance the field of medical imaging and diagnosis. With the increasing complexity and volume of medical data, machine learning models like the one evaluated in this study could play a crucial role in improving the accuracy and efficiency of diagnoses. Furthermore, the study underscores the importance of continued research and development in this field to further refine these models and overcome potential limitations or challenges. Overall, the study contributes to the fields of medical imaging and machine learning and sets the stage for future research and advancements in this area.
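The metric definitions given above can be made concrete with a short sketch that derives per-class precision, recall and F1 score from a confusion matrix; the matrix entries below are illustrative counts roughly consistent with the reported rates, not the study's actual data.

```python
import numpy as np

# Per-class precision, recall and F1 from a 4x4 confusion matrix
# (rows = true class, columns = predicted class). The counts are made up,
# assuming 75 test images per class as described later in the Methods.

classes = ["normal", "glioma", "meningioma", "pituitary"]
cm = np.array([[69,  0,  5,  1],
               [ 0, 71,  4,  0],
               [ 1,  0, 73,  1],
               [ 2,  0,  3, 70]])

for i, name in enumerate(classes):
    tp = cm[i, i]
    precision = tp / cm[:, i].sum()   # TP / (TP + FP)
    recall = tp / cm[i, :].sum()      # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name:10s} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```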
Introduction Brain tumors are abnormal growths of cells within the skull, which can be either primary, originating from brain cells, or secondary, resulting from cancer spread (metastasis) from other body parts. The early detection of brain tumors is crucial for improving patient outcomes, as it can significantly reduce morbidity and mortality rates [1]. Among the various types of brain tumors, gliomas, meningiomas, and pituitary tumors are particularly notable due to their prevalence and the challenges they present in diagnosis and treatment. Current diagnosis of brain tumors involves the use of different imaging modalities, including magnetic resonance imaging (MRI) and computed tomography (CT) scans. MRI provides key information on the anatomical structure of human tissue via soft-tissue contrast, making it essential for identifying a brain tumor's size, shape, and location [2]. However, the challenge lies in the ability of MRI to properly classify tumors. Conventional MRI classification and grading of gliomas, a specific type of brain tumor, ranges from 55.1% to 88.3% in accuracy and has even demonstrated a 50% false-positive rate [3]. This holds tremendous clinical significance, as early accurate detection of brain tumors is associated with improved treatment and survival [4]. The reason behind this is the difficulty of segmentation [5,6]. MRI segments a tumor into several visualized images in order to view multiple features of the tumor. Some tumors, such as meningiomas, are easily segmented; gliomas, however, are difficult to localize and are often diffuse and poorly contrasted, posing a challenge for segmentation. The modeling of the segments into data-based models also plays a role in the difficulty of proper tumor classification. 3D data-based models provide more contextual information; however, they require manual segmentation, take significantly more time when scanning, and are still susceptible to inaccuracies. The incorporation of machine learning could offer a possible solution. Machine learning uses neural networks trained on minimal input data to solve scenarios and is beneficial in medical imaging via pattern recognition and identification [7]. The model in this study aims to address the aforementioned issues by utilizing machine learning to accurately classify several brain tumors, thereby reducing the number of missed brain tumor diagnoses. Several advanced MRI classification techniques, such as dynamic contrast-enhanced (DCE) imaging and magnetic resonance spectroscopy (MRS), are in use today to improve the precision of tumor detection [8,9]. DCE allows for the quantitative evaluation of intravascular contrast diffusion within the interstitium, a frequent phenomenon in brain tumors. However, the intricacies of its acquisition and analysis methods restrict its clinical application and accessibility [10]. MRS, which incorporates molecular abnormalities, provides valuable insights into brain tumors. Yet its clinical application is rare due to its lengthy process, variability across different imaging locations, and the necessity of a technologist's or radiologist's assistance [11].
Recent advancements in machine learning have led to the development of sophisticated image classification models that can accurately identify different types of brain tumors. Convolutional neural networks (CNNs) are at the forefront of these developments, with architectures designed to process and analyze MRI images for tumor detection [12][13][14]. A CNN model with four convolutional layers, ReLU activation functions, dropout layers, and max-pooling layers was proposed, achieving an accuracy of 97.39% and an average F1 score of 96.11% in one test [12]. The model used 10-fold cross-validation, a technique that evaluates a model's performance by partitioning the data into subsets and using each in turn for testing. The image input size for the CNN was set to 256 × 256 pixels, and the classification output was divided into three classes corresponding to meningioma, glioma, and pituitary tumor. In addition to CNNs, other deep learning methods and machine learning techniques have been explored for brain tumor detection. For instance, one study proposed two deep learning methods and several machine learning approaches, achieving training accuracies of 96.47% and 95.63% for a 2D CNN and an auto-encoder network, respectively [1]. The areas under the ROC curve for both networks were impressively high, at 0.99 or 1, indicating excellent classification performance. Transfer learning has also been employed to enhance classification accuracy. By using pre-trained models such as EfficientNets and MobileNetv3, researchers have achieved significant performance improvements. For example, EfficientNetB2 yielded an overall test accuracy of 99.06%, while MobileNetv3 achieved the highest accuracy of 99.75% [13,14]. Transfer learning is particularly beneficial when dealing with limited labeled medical data, as it allows the use of knowledge acquired from extensive benchmark datasets like ImageNet [13,14]. Hybrid methods that combine CNNs with other algorithms, such as support vector machines (SVM) or artificial neural networks (ANN), have been developed to extract deep feature maps with high accuracy [15]. These hybrid models utilize both deep features from CNNs and handcrafted features to produce highly efficient features for distinguishing between types of brain tumors [15]. Comparative studies have shown that different architectures can yield varying levels of accuracy, sensitivity, specificity, and F1 score. For instance, AlexNet, VGG16, and ResNet-50 have shown accuracies ranging from 95.60% to 97.66%, with a hybrid model of VGG16 and ResNet-50 reaching an accuracy of nearly 100% [16]. This study examines the application of machine learning, specifically image classification models, to the accurate identification of these brain tumor types using MRI images [12,16]. Specifically, the model under consideration addresses the aforementioned challenges by requiring only internet access, thereby providing wide accessibility, and shows promising results in the accuracy of brain tumor detection.
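For illustration, a minimal Keras sketch of a four-convolutional-layer CNN of the kind summarized above [12] might look as follows; the filter counts, dense-layer size and dropout rate are assumptions, not the published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative sketch of a four-convolutional-layer CNN as summarized above
# [12]: ReLU activations, max-pooling and dropout, 256x256 inputs and a
# three-class output (meningioma, glioma, pituitary). Filter counts and the
# dropout rate are assumptions; grayscale input is also an assumption.

def build_model(n_classes=3):
    model = models.Sequential([
        layers.Input(shape=(256, 256, 1)),       # single-channel MRI slice
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                     # regularization before the head
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```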
Materials And Methods The machine learning model used in this study was trained on the "Brain Tumor MRI Dataset," an open-source online database of brain MRI images curated by Msoud Nickparvar via Kaggle [17]. This dataset amalgamates three distinct datasets, figshare, SARTAJ, and BR35H, comprising a total of 7,023 images. For the purposes of this research, a subset of 2,000 images was selected, encompassing brain MRIs of gliomas (500 images), meningiomas (500 images), pituitary tumors (500 images), and normal scans without tumors (500 images). The image classification model was constructed using Google Teachable Machine (Alphabet Inc., Mountain View, CA), a widely used online platform facilitating the testing, training, and deployment of machine learning classification models [18]. The training process encompassed 80 epochs with a batch size of 32 and a learning rate of 0.0001. Each image was adjusted to ensure a 1:1 aspect ratio. The primary objective during training was to evaluate Google Teachable Machine's (GTM) efficacy in discerning between gliomas, meningiomas, pituitary tumors, and normal scans devoid of tumors. While GTM provides a confusion matrix and accuracy metrics, additional parameters such as precision, recall, and F1 score were derived through statistical analyses of the generated plots and accuracy data to better quantify GTM's performance in distinguishing between the various types of brain scans. Results The performance of the AI model in classifying brain images across the different tumor categories was evaluated using several metrics, as shown in Table 1. The table summarizes the performance of the classification model on four classes: "normal" (no tumor), "glioma," "meningioma," and "pituitary." Each class is evaluated on four metrics: accuracy, precision, recall, and F1 score. For normal cases, the model exhibited an accuracy and precision of 0.96, a recall of 0.92, and an F1 score of 0.94, indicating high accuracy and precision with a few false negatives. For glioma, the model demonstrated perfect accuracy and precision (1.00), with a slightly lower recall of 0.94 and an F1 score of 0.97, suggesting a few false negatives. For meningioma cases, the model had an accuracy and precision of 0.84, a high recall of 0.97, and an F1 score of 0.90, implying good identification of true positives but a higher rate of false positives. For pituitary cases, the model performed highly, with an accuracy and precision of 0.97, a recall of 0.94, and an F1 score of 0.95. In the evaluation of the diagnostic model, the false-negative rates were 8% for the normal class, 6% for both the glioma and pituitary classes, and 3% for the meningioma class. Furthermore, the analysis of false-positive rates, derived from the precision values, revealed that the normal and pituitary classes had rates of 4% and 3%, respectively, while the glioma class demonstrated no false positives. The meningioma class, however, exhibited a higher false-positive rate of 16%. Overall, the model performs well across all classes, with the highest performance observed in the glioma class. These metrics provide insights into the model's performance. For instance, an accuracy of 0.96 for the "no tumor" class implies that 96% of the images without tumors were correctly classified. A precision score of 0.96 for the "no tumor" class indicates that 96% of the images predicted as "no tumor" were indeed without tumors. A recall of 0.92 for the "no tumor" class suggests that 92% of the actual images without tumors were correctly identified.
An F1 score of 0.94 for the "no tumor" class reflects a strong balance between precision and recall for this category. The confusion matrix data (Figure 1) further confirm these metrics. The off-diagonal values, ranging from 0 to 8%, show that some misclassifications occur, but overall error rates are low, suggesting a reliable model. The accuracy plot generated from GTM, shown in Figure 2, reveals a good level of model performance. The training curve shows a rapid ascent to high accuracy, hitting the 0.8 mark at the outset and reaching a perfect 1.0 accuracy from the 30th epoch onwards. The test accuracy curve also starts strong, similarly breaching the 0.8 threshold early in training, yet rather than reaching perfection, it levels off at a value just above 0.9. This still represents very high accuracy and indicates that the model generalizes well to unseen data, albeit with a slight performance drop compared to the training data. Despite the plateau, a test loss value of just under 0.2 is relatively low and indicates that the model has strong predictive capacity on unseen data. It suggests that while the model may be fine-tuned to the training examples, it is likely to perform well in practice, potentially assisting clinicians in making informed decisions based on its predictions. However, it is crucial to validate these observations with additional real-world data to ensure the model's practical applicability and to confirm that it can maintain a low error rate in diverse clinical scenarios. Discussion Our group has a history of exploring the applications of machine learning in various healthcare and medical contexts, ranging from patient safety enhancement through integrated sensor technology [19] to the prediction of coronary artery disease [20], cardiovascular health management in diabetic patients [21], and image detection of colonic polyps [22]. We have also contributed to the academic discourse on the transformative potential of AI in healthcare, navigating the ethical landscape and public perspectives, through our review paper [23]. Furthermore, our paper on predictive modeling in medicine underscores our commitment to harnessing the power of machine learning for improved healthcare outcomes [24]. The current study, focusing on the classification of brain tumors using a machine learning model built on the GTM platform, is a natural extension of our previous work. It leverages advancements in machine learning, particularly in the realm of medical imaging, to develop a model that can accurately identify different types of brain tumors. The promising results of this study, coupled with our previous research, underscore the potential of machine learning as a powerful tool in healthcare, capable of enhancing diagnostic accuracy and ultimately improving patient outcomes.
The primary objective of this investigation was to develop an image classification model using machine learning methodologies, applied to a comprehensive brain MRI dataset. The accurate distinction of tumor pathologies from non-tumor mimickers such as bacterial abscesses, toxoplasma, and tuberculomas is of paramount importance. Misinterpretations can lead to erroneous diagnoses, such as mistaking an abscess for a glioma, which can significantly impact the treatment plan and prognosis. This underscores the need for advanced diagnostic tools that can accurately differentiate between these conditions. Machine learning models, trained on comprehensive datasets encompassing a wide range of pathologies, could potentially address this challenge. These models could learn the subtle differences in imaging characteristics between tumor and non-tumor conditions, thereby enhancing diagnostic accuracy. However, the effectiveness of such models is contingent on the quality and diversity of the training data, necessitating the inclusion of a broad spectrum of both tumor and non-tumor pathologies. Future research should focus on developing and validating such models, with the ultimate aim of aiding radiologists in making more accurate diagnoses. Conclusions This study developed and evaluated an image classification model using machine learning techniques applied to brain MRI datasets. The model demonstrated promising accuracy, precision, recall, and F1 scores across various tumor categories, indicating its potential utility in clinical settings for tumor classification tasks. However, the study also identified inherent limitations of the model, including dataset bias, overfitting tendencies, and the absence of external validation datasets. These limitations highlight the need for further refinement and validation efforts. To address these issues, future work should focus on incorporating diverse datasets, implementing regularization techniques, and validating the model in larger cohorts and real-world scenarios. These steps will be crucial for enhancing the model's reliability and applicability in clinical practice. Despite these challenges, the findings of this study lay a solid foundation for continued advancements in AI-driven diagnostic tools for brain tumor classification. This work ultimately contributes to improved patient care and outcomes in neuroimaging, paving the way for future research and development in this critical area of healthcare. This study underscores the potential of machine learning in enhancing diagnostic accuracy and efficiency in neuroimaging, and it serves as a stepping stone towards the broader application of AI in healthcare. An epoch, indicating the number of times each image traverses the training model, and the batch size, denoting the number of images used per training iteration, were carefully considered. With a total of 63 batches, given 2,000 images and a batch size of 32, each epoch concludes once all batches have been processed. GTM autonomously partitions its dataset into training and test samples, allocating 85% of the images for training (425 images per class) and 15% for testing (75 images per class), a split unmodifiable by the user. FIGURE 1: A 4x4 confusion matrix showing classification accuracy for four different conditions, with true positive rates as follows: 'normal' at 96%, 'glioma' at 100%, 'meningioma' at 84%, and 'pituitary' at 97%, with low misclassification between conditions.
FIGURE 2: The GTM-generated accuracy plot shows rapid learning, with the training accuracy quickly reaching perfect scores by the 30th epoch. The test accuracy also climbs swiftly, indicating good model generalization, and stabilizes at a high value just above 0.9, suggesting robust predictive performance on unseen data. FIGURE 3: The GTM loss plot displays a learning progression, with the training loss decreasing from an initial value above 1.0 to 0.0 by 80 epochs, illustrating the model's improving accuracy on the training data. The test loss follows a similar decline but levels off just under 0.2, indicating good generalization to unseen data with a small, stable margin of error. TABLE 1: Performance metrics of the classification model used to classify four different classes: normal, glioma, meningioma, and pituitary. Each class is evaluated on four metrics: accuracy, precision, recall, and F1 score. The model was trained using GTM's standard image classification pipeline, which facilitated the training process. The outcomes of the image classification model showcased promising levels of accuracy, precision, recall, and F1 score. The "no tumor" class achieved an accuracy of 0.96, indicating the correct classification of 96% of tumor-free images by the model. The glioma class achieved a perfect accuracy score of 1.00, affirming the model's adept identification of all glioma images. In comparison, the meningioma class achieved an accuracy of 0.84, while the pituitary tumor class achieved an accuracy of 0.97. The mean accuracy across all classes was 0.94, underscoring the model's overall efficacy. Observations from the accuracy and loss plots revealed a positive correlation between increased epochs and enhanced training and test accuracy. However, with prolonged epochs, a divergence in loss between the training and test splits emerged, indicating potential overfitting. This phenomenon suggests excessive specialization of the model on the training data, compromising its adaptability to unseen data. The model presented has a few drawbacks. Its capacity to generalize across different patient populations or imaging methods may be limited or biased due to its reliance on a single dataset, however large. The robustness of the model and its application to actual clinical settings are hampered by the lack of external validation datasets. To improve the model's capacity for generalization, more varied datasets should be included. The observed overfitting tendencies highlight the necessity of additional research into regularization strategies. Even though the model showed encouraging performance metrics, further validation in larger cohorts and real-world settings is needed to determine the model's dependability and effectiveness in realistic tumor classification tasks before it can be applied clinically. These findings underscore the effectiveness of the AI model in accurately categorizing brain images across various tumor types. Despite its remarkable precision and accuracy, particularly evident in the glioma class, the model displayed comparatively diminished precision in the meningioma class. While the F1 scores highlighted a harmonious balance between precision and recall across most tumor categories, concerns regarding potential overfitting surfaced from the observed loss plots. Addressing this challenge warrants further investigation and the adoption of techniques like regularization or early stopping to enhance model robustness. Regularization techniques involve shrinking model coefficients to minimize
loss, thus curbing overfitting without compromising training accuracy [25,26]. Additionally, augmenting the training data with diverse edge cases could help mitigate overfitting and bolster the model's generalization capabilities. Continued research and refinement efforts are imperative to mitigate overfitting concerns and to elevate the model's overall performance and reliability in clinical tumor classification applications.
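The two mitigations named above, coefficient-shrinking regularization and early stopping, can be sketched in Keras as follows; the L2 strength and the patience value are assumptions for illustration, not settings used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers, callbacks

# Sketch of the two overfitting mitigations discussed above: L2 weight
# regularization (shrinking coefficients) and early stopping on the
# validation loss. The strength 1e-4 and patience of 5 epochs are assumptions.

l2 = regularizers.l2(1e-4)
model = models.Sequential([
    layers.Input(shape=(256, 256, 1)),
    layers.Conv2D(32, 3, activation="relu", kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(4, activation="softmax", kernel_regularizer=l2),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
# Training call, assuming (x_train, y_train) and (x_val, y_val) exist:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=80, batch_size=32, callbacks=[early_stop])
```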
4,720.8
2024-06-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Advances in the development of a cognitive user interface In this contribution, we summarize recent development steps of the embedded cognitive user interface UCUI, which enables user-adaptive scenarios in human-machine and even human-robot interaction by means of sophisticated cognitive and semantic modelling. The interface prototype is developed by different German institutes and companies, with steering teams at Fraunhofer IKTS and the Brandenburg University of Technology. The prototype is able to communicate with users via speech and gesture recognition, speech synthesis and a touch display. The device includes autarkic (i.e. fully on-device) semantic processing and, beyond that, a cognitive behavior control, which supports intuitive interaction for controlling different kinds of electronic devices, e.g. in a smart home environment or in interactive or collaborative robotics. Contrary to available speech assistance systems such as Amazon Echo or Google Home, the introduced cognitive user interface UCUI ensures user privacy by processing all necessary information without any network access by the interface device. Research background The joint research project Universal Cognitive User Interface (UCUI, 2015-2018) is developing methods, data and a prototype [1] to easily manage connected home appliances (as an example scenario) through corresponding intuitive actions of the users. In the framework of our contribution, some preliminary results from the UCUI project shall demonstrate the potential of a novel class of interfaces for human-machine and human-robot interaction. With UCUI, the user can control the system via speech, gestures or a virtual keyboard. The system is designed to operate autonomously, using neither an extensive database nor a network connection. Recent speech dialog systems and cognitive user interfaces allow verbal, natural human-machine interaction and achieve excellent performance. However, the leading commercial solutions rely heavily on transmitting sensitive user information, such as personal data or voice recordings, through public networks and on processing, storing and analyzing the data on servers of external service providers. The UCUI demonstrator realizes a cognitive user interface for intuitive interaction with arbitrary electronic devices and ensures privacy by design. The system collects user-specific data which are processed by a cognitive behavior control to allow adaptation to the user's communication style and to improve the strategy in problem solving. The underlying paradigm requires the system to adapt to the user and not vice versa, given that such systems are mainly used by human beings who are not necessarily trained in the use of complex technical devices. In addition, user-specific data are not shared with other users, to avoid possible inferences from these data. In order to achieve appropriate system behavior, a variety of possible human-machine interactions needs to be integrated into the UCUI system, since alternative input phrases may have an identical meaning in speech control. Therefore, all input and output modalities are fused on a semantic processing level. For the data preparation, we conducted Wizard-of-Oz (WoZ) experiments to collect typical user inputs [2]. In further steps, the user behavior shall be analyzed and integrated into the system model.
The analysis and classification software in the system is based on the Unified Approach to Signal Synthesis and Recognition [3,4], hosted by BTU Cottbus-Senftenberg and Fraunhofer IKTS, and includes a speech recognizer and a speech synthesis engine, both ported to the hardware. The project partners are mainly focusing on the cognitive processing of meanings and on knowledge about the user's habits. For the representation of semantic data, feature-values-relations [5][6][7][8] are used, processed by Petri net transducers (PNT) [9,10]. Feature-values-relations are tree-like, non-sequential structures, where a feature has a set of values which themselves can be features again. Petri net transducers are used to translate input signals into such structures and also to translate them into output signals. The system shall be capable of learning from the behavior of users in order to improve its function. Multiple devices will be able to cooperate (distributed microphone array, task assignment, etc.) over a strongly encrypted wireless connection. The system design is based on studies of user-machine interactions in a real home-automation scenario and takes relevant legal and ethical aspects into account. For the demonstrator, the project partners reduced the task to the domain of controlling a heating installation. The semantic processing transforms all inputs (speech, gestures and touch screen) into a unified representation. By the cognitive behavior control, this representation can be transformed into any output channel (speech, acoustic signals, display). Figure 1 shows the first version of the UCUI demonstrator. Data retrieval by Wizard-of-Oz experiments The described cognitive interface is developed user-driven, which poses a challenge, as the overall system is still under construction. The project partners need to evaluate and optimize some system functions before their implementation. For this purpose, the Wizard-of-Oz (WoZ) method is used in the UCUI project [2]. The main component in Wizard-of-Oz experiments is a human being (the wizard), who simulates the final system's behavior. During the experiment, the test user interacts with the interface of a simulated technical system; all system reactions to the user are produced by the wizard. Wizards have to react accurately and quickly to user inputs. This can be supported with predefined, frequent responses available for rapid access, e.g. "please wait, your inquiry is processed" or similar statements, and by suitable training of the wizards to achieve consistently accurate responses. User scenarios and tasks require a known goal of the actions, which can only be achieved by means of the system, without restricting the user in his or her solution strategy, verbal utterances or gestures. The task construction has to consider the interaction variety of the user and should communicate the options to the user. Within the UCUI tests, the user receives written instructions beforehand, and the interface system is demonstrated on the basis of a simple vending machine application by an investigator (not identical with the wizard). Furthermore, the task assignment is based on hypotheses with regard to the expected user and system behavior. Finally, a successful, user-driven system construction includes a series of WoZ experiments, whereby the tested system states should increasingly interact with the user in autonomous mode, i.e. be less controlled by the wizard. Consequently, the UCUI project involves three consecutive test runs, followed by an overall evaluation of the optimized system.
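Since the text does not fix a concrete data format, the following sketch illustrates the idea of a feature-values-relation as a nested mapping, together with a toy flattening into a device command; both the structure and all names are hypothetical, not the UCUI project's actual representation or its Petri net transducers.

```python
# Minimal sketch of a feature-values-relation (FVR) as a nested mapping:
# a feature has a set of values, each of which may itself carry features.
# The structure and the toy translation below are illustrative assumptions.

fvr = {
    "action": {
        "set": {
            "device": {"heating": {}},
            "parameter": {"temperature": {"value": {"21": {"unit": {"celsius": {}}}}}},
        }
    }
}

def leaves(node, path=()):
    """Enumerate feature paths of an FVR tree (depth-first)."""
    if not node:
        yield path
    for feature, values in node.items():
        for value, sub in values.items():
            yield from leaves(sub, path + ((feature, value),))

def to_command(tree):
    """Toy 'transducer': flatten the FVR into a device command string."""
    flat = dict(pair for path in leaves(tree) for pair in path)
    return f"{flat['device']}.{flat['parameter']} := {flat['value']} {flat['unit']}"

print(to_command(fvr))   # -> heating.temperature := 21 celsius
```

Alternative utterances ("make it warmer", "set the heating to 21 degrees") would all map onto the same FVR, which is the point of fusing all modalities on the semantic level.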
Following this approach, the UCUI project involves three consecutive test runs, followed by an overall evaluation of the optimized system. A Wizard-of-Oz Framework (WoOF) was built to support the user-driven construction. It allows for the creation of different evolving simulators and serves as an execution environment for these simulators. The formulated requirements include general ones for frameworks supporting WoZ experiments as well as specific ones following from the project specifications. Since the task is to simulate a real system that gives visual and audible feedback to the user, there has to be a mechanism to present visual objects on a monitor and to route audio data to the user. Ultimately, the system should be controllable via speech and touch input (among other inputs), which makes it necessary to interact with the visual objects and to route audio data from the user to the framework. Besides these basic functionalities, adequate support for carrying out the experiments has to be included. This covers the creation of simulators and experiments as well as support for the wizard during the experiments. For the UCUI project, the first experiments [2] consisted of a series of scenarios, presented as tasks to the participants. The wizard was able to switch between scenarios. A single scenario is understood as a unit comprising a user task, the aim of the task, and possible visual and audible feedback. The outcome of the experiments was a collection of user behaviors. To this end, all interactions with the system were recorded: the monitor content the participants saw, all touch events they triggered on it, and all spoken input during the experiments. To support the integration of gesture control in later project phases, the participants were additionally recorded on camera. To ease the evaluation of the recorded data, the audio output of the system was recorded as well. A form of session management, allowing data to be kept per participant, was also included. Respecting the privacy of the participants was explicitly not a requirement on the framework; this had to be assured by the experimenters. The motivation behind this was that there cannot be an algorithmic solution appropriate to all applications of the framework, so this burden was left to its users. A session, i.e. the execution of an experiment with a distinct participant, included a set of well-defined events, and it was possible to add new types of events to the framework. The preparation of the collected data was semi-automated: all data of a session were cut into segments corresponding to the scenarios, and audio data were transliterated and phonetically transcribed. All steps were supervised by a human being. The construction of the feature-values-relations and Petri net transducers (cf. the following sections) is ongoing research. Figure 2 shows the WoOF graphical interface as seen by the wizard, mainly on the left side. On the right side, we replaced the original UCUI smart home task (the heating control feedback) with the planned interactive cooperative robot task of collecting fruits and handing them over to a user, as suggested by the AI/robotics company 7Bot in [11]. In future human-robot experiments, we will use such a low-cost platform (in this case ca. 350 USD) with an open-source software interface to collect our user inputs. Of course, for the WoZ experiments, the robotic arms will be manually controlled by the wizard.
This will include actions that surprise users, in order to collect more realistic cognitive user data in such a cooperative control task. Employment of image schemas The paradigm shift from mainly technology-centered devices, as described in the introduction and in the preliminary WoZ experiments, towards human-centered devices requires the implementation of basic human models for a variety of human experiences. In [12], we reported, among other things, on an experimental investigation of image schemas as basic features of human knowledge. Such schemas could support the development of more intuitive interfaces, enabling effective interaction with a system through the subconscious application of basic prior knowledge, following Mohs et al. [13] and Turner [14]. One such fundamental aspect of human experience, and the focus of this investigation, is the impact of image schemas on human knowledge and language. Image schemas (e.g. up-down, center-periphery) are basic pre-conceptual, universal patterns of human experience that integrate information from multiple generic and perceptual modalities, such as visual, acoustic and haptic information, as suggested by Lakoff and Johnson [15], among others. They serve to structure human knowledge, behavior and experience. As basic building blocks of human knowledge, generated in the earliest childhood interactions, image schemas may be available to all potential users. While previous research seems to support this hypothesis (e.g. Hurtienne et al. [16]), surveys on concept formation suggest that some image schemas occur earlier in infancy than others. Mandler and Pagán Cánovas [17] considered path schemas such as up-down, container, location, blockage, into, out of and open to be basic image schemas occurring early in infancy, at preverbal stages. By contrast, center-periphery, scale, balance, cycle and other process schemas, near-far, multiplicity/unity, as well as attributional image schemas (e.g. big-small, warm-cold), are built upon these basic primitives and should thus exert less influence on the development of human thought and knowledge [17]. To investigate whether the developmental occurrence of image schemas influences their application in human speech interaction with machines and computers, we applied the WoZ paradigm and tested two hypotheses in [12]: First, we expected early, basic image schemas to be employed more often than later image schemas. Second, we expected no impact of individual difference variables on the frequency of applied image schemas, since these basic building blocks of human knowledge should be equally available to all users regardless of age, gender and technical experience. Forty-three German native speakers (20 men, 23 women; mean age 29.2 years, SD 9.9 years) participated in the speech interaction study. To calculate technical experience (TE), participants were asked to indicate the frequency with which they used a variety of technical devices at home (e.g. smartphone, laptop) and in public (e.g. ticketing machine, self-service banking). For seventeen items, participants selected response options ranging from 0 (I don't know this device) via 5 (almost daily) to 7 (more than once per day). The mean of all responses was then calculated for each participant, with a higher score indicating greater technical experience; a small worked example is given below. The TE scores of our test persons ranged from 2.35 to 4.24 (mean TE score: 3.34). Surprisingly, neither age nor gender was correlated with TE.
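As a small worked example of the TE computation just described, here is a minimal Python sketch; the seventeen ratings are invented for illustration and are not data from the study.

# TE score: the mean over seventeen device-usage ratings on a 0-7 scale.
# The ratings below are invented example data, not taken from the study.
ratings = [5, 7, 3, 0, 4, 6, 2, 5, 3, 4, 1, 5, 6, 2, 3, 4, 5]
assert len(ratings) == 17
te_score = sum(ratings) / len(ratings)
print(round(te_score, 2))  # 3.82, which falls inside the reported range 2.35-4.24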
The speech interaction task took place in a quiet and moderately illuminated room. All participants were asked to complete the questionnaire ascertaining their technical experience as well as demographic data. Participants were then asked to stand in front of a multi-touch panel. They were presented with twenty test scenarios investigating the image schemas underlying free speech interaction with a heating device. They were not informed about image schemas in any way and were told that there are no right or wrong answers, since their task was to test the functionality of a newly developed heating device. Participants were asked to respond to the various scenarios as they would at home, simply telling the heating device which changes, if any, they would like it to carry out. A content analysis was carried out to identify the image schemas underlying the utterances (see Table 1). Image schemas underlying speech and gesture responses were analyzed by two independent coders. To ensure the reliability of the coding, one coder coded the entire speech and gesture dataset, and a second coder, blind to the study hypotheses, coded 25% of both datasets. In the speech interaction study, the two coders agreed 88% of the time. Each participant employed on average 6.7 different image schemas (SD 1.0) in speech interaction. The four image schemas up-down, verticality, horizontality and container were regarded as early, basic image schemas, whilst the six image schemas balance, scale, warm-cold, process (including cycle), near-far and multiplicity/unity were regarded as later image schemas [17]. There were no significant effects of gender, age or technical experience on these frequencies (all p's > .05). In general, basic image schemas were employed significantly more often (67.8%) than image schemas occurring later in development (32.2%). In line with the second hypothesis, gender, age and technical experience were correlated neither with the frequency of early nor with that of late image schemas. This suggests that image schemas, as basic building blocks of human knowledge, are available to users of all ages and levels of technical experience, independently of gender. Table 1. Coding of speech utterances into image schemas [12]. According to [13,14], interaction should be based on (automatically retrievable) prior knowledge available to all potential users. The survey presented in [12], however, demonstrated that not all image schemas are equally intuitive in human-computer interaction: the developmental occurrence of image schemas affects how frequently they are applied when interacting with technical devices. Early image schemas should thus be given preference over late image schemas in interface design. Feature-values-relation (FVR) Cognitive user interfaces require a bidirectional translation between input signals and representations of meaning. While low-level signals are sequential, semantics is, in general, non-sequential. In [5], feature-values-relations (FVRs) were introduced for the representation and processing of semantic information. An example is depicted in Figure 3, showing an FVR for the speech input "Increase the temperature to 23 degrees on Saturday.", where the relevant values of the input are related to semantic categories relevant for the system. These categories depend on the available actions of the system and the domains of usage, and are collected in a world model. In [6,7], FVRs are equipped with weights, which are omitted in
Figure 2, and related to language modelling, whereas [8] defines several operations on FVRs. A description of a behavior control, in an instinctive and in an adapting version, building upon these operations is given in [12]. In our multi-modal system, all input signals are transformed into FVRs representing their individual semantics (cf. the exemplary discussion of a touch screen in [12]). All FVRs contribute to a joint input semantics, which can then be unified with the current state (another FVR). This state serves as memory between dialog turns and contains all data gathered during an ongoing dialog. By comparing the new state with the world model, and thus identifying the goal of a dialog, the semantics of an appropriate system action can be computed. The world model encodes what data is needed to execute a specific action. Whenever there is not enough data, another dialog turn requesting more data is initiated, until the execution of an action is possible. Such requests can be routed to different parts of the system: available sensors, a user model holding the user's habits, a visual or auditory prompt for user input, or any other module from which the system can obtain and incorporate the missing data. The flexibility of the approach arises from the fact that all processing is done in terms of FVRs and thus independently of the concrete system and of any input and output modalities. From a behavioral point of view, it therefore does not matter whether the task is controlling a heating system or taking part in a collaborative human-robot interaction. Petri net transducer (PNT) For the technical realization, so-called Petri net transducers (PNTs) were proposed [9,10]; they process labelled partial orders (LPOs), which in turn can represent FVRs. The application of PNTs to the bidirectional translation between sequences and partial orders allows us to build a seamless signal-to-semantics recognition network. Moreover, we are able to prime this network by composing it with a semantic structure representing an expectation of the next input. This expectation is a truly semantic one, but it adjusts the recognizer down to all of its low-level parts. Conversely, we use the same techniques on the synthesis side, where we can, for example, inject syntactic restrictions to adapt the speech output to the user's wording. As with multi-modality on the input side, we can use different translation units to present the same semantics on different output channels. Fock space Based on a further concept described in [18], we are currently developing a novel theory for mapping FVRs into a so-called Fock space, known from quantum mechanics. Given N as the set of nodes of an FVR, there exists a Hilbert space H of dimension |N|, since every element of N is used as one basis vector. The Fock space is then defined as F(H) = ⨁_{n=0}^{∞} H^{⊗n}, where ⨁ is the direct sum and ⊗ denotes the tensor product; a small numerical sketch of this construction is given below. This allows us to use a different branch of mathematics for the processing of semantics. As a side effect, we gain new insights, e.g. a first approach to discovering semantic structures from data or to using them for action planning, as described in [18]. Realization as a hardware prototype As described in [1], the first demonstrator was realized in August 2017 as an integrated circuit device, which still requires an external power supply and uses an RS232 interface for the communication with arbitrary electronic devices. Four microphones, a loudspeaker and the touch panel were integrated. Figure 4 shows the main board.
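Before turning to the board details, here is the small numerical sketch of the Fock-space construction promised above. It illustrates only the dimensional bookkeeping, written in Python with NumPy; the node names and function names are illustrative assumptions, not part of the UCUI implementation, and restricting the direct sum to finite n is of course a simplification.

import numpy as np

# Hypothetical node set N of an FVR; dim(H) = |N|.
fvr_nodes = ["increase", "temperature", "23 degrees", "saturday"]
dim = len(fvr_nodes)

# One basis vector of the Hilbert space H per node (one-hot encoding).
basis = {node: np.eye(dim)[i] for i, node in enumerate(fvr_nodes)}

def embed(nodes):
    # Embed an n-tuple of nodes into H^(⊗n), one summand of F(H) = ⊕_n H^(⊗n).
    vec = np.array([1.0])  # n = 0 summand: the scalar component
    for node in nodes:
        vec = np.kron(vec, basis[node])  # np.kron realizes the tensor product
    return vec

v = embed(["temperature", "23 degrees"])
print(v.shape)  # (16,) = dim**2, the dimension of the n = 2 summand

The point of the construction is that tuples of any length n live in one common space, so operations on semantic structures of different sizes can be expressed uniformly.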
The main board (100 x 130 mm) includes two digital signal processors (DSPs), one field-programmable gate array (FPGA), four RAMs, a flash memory, an audio codec and a motion sensor. The FPGA performs the acoustic signal analysis, some further algorithms for speech recognition, as well as signal and data routing. One DSP finalizes the speech recognition and runs the cognitive processes based on FVRs; beyond that, it executes the speech synthesis and controls the display. In the next steps, our project partners Javox Solutions GmbH and XGraphic Ingenieurgesellschaft mbH will run their algorithms for beamforming, noise and echo cancellation, as well as the gesture recognition, on the second DSP. Conclusions In highly user-adaptive scenarios of human-machine and human-robot interaction, we suggest decidedly cognitive modelling including semantic and behavior processing. The introduced UCUI prototype is able to communicate with users in parallel via speech and gesture recognition, speech synthesis and a touch screen. The device operates autarkically and supports largely intuitive interaction for controlling different kinds of machines, informed by our survey of early and late image schemas. It ensures user privacy by its system design and does not rely on network access. Conventional interfaces cannot benefit from semantic prior knowledge; by using PNTs, semantic structures corresponding to input signals within a multimodal, hierarchical signal processing system can be computed without premature decisions. The current demonstrator supports speech recognition, speech synthesis, semantic processing and a simplified cognitive behavior control on the embedded platform. After our next development steps, an extended behavior control model will enrich the interaction opportunities, and the ultrasonic gesture control will be implemented.
The Secular Moral Project and the Moral Argument for God: A Brief Synopsis History: This article provides an overview of the history of what is termed the secular moral project by providing a synopsis of the history of the moral argument for God's existence and the various historical processes that have contributed to the secularization of ethics. I argue that three key thinkers propel the secular moral project forward from the middle of the 19th century into the 20th century: John Stuart Mill, whose ethical thinking in Utilitarianism serves as the background to all late 19th century secular ethical thinking; Henry Sidgwick, who, in the Methods, indisputably establishes the secular autonomy of ethics as a distinctive discipline (metaethics); and finally, G.E. Moore, whose work, the Principia Ethica, stands at the forefront of virtually all secular metaethical debates concerning naturalism and non-naturalism in the first half of the 20th century. Although secular metaethics continues to be the dominant ethical view of the academy, it is shown that theistic metaethics is a strongly reemerging position in the early 21st century. The Moral Argument for God: A Brief Synopsis In considering the moral argument for the existence of God, it is only appropriate to have a sense of the background and history of this particular argument in the historical debates of normative ethics and metaethics. Dave Baggett and Jerry Walls have written an excellent overview and analysis of the history of the moral argument for God's existence entitled The Moral Argument: A History (Baggett and Walls 2019). The history of this particular argument, rarely thoroughly considered, is interesting and impressive. Consider the following quick synopsis. The modern form of the moral argument proper is usually traced back to Immanuel Kant (1724-1804) (Kant [1785] 2012, Groundwork of the Metaphysics of Morals; Kant [1788] 2015, Critique of Practical Reason; Kant [1797] 2017, The Metaphysics of Morals). 1 Among other notable thinkers who advanced a positive form of the moral argument are John Henry Newman (1801-1890) in his An Essay in Aid of a Grammar of Assent (1870); Arthur Balfour (1848-1930) in Theism and Humanism (Balfour 1915) and Theism and Thought (Balfour 1923); 2 and William Sorley (1855-1935) in his On the Ethics of Naturalism (Sorley [1884] 2015) and Moral Values and the Idea of God (Sorley 1918). 3 Hastings Rashdall (1858-1924) also made unique contributions in his The Theory of Good and Evil: A Treatise on Moral Philosophy (Rashdall 1907, vols. 1-2), as did Clement Webb (1865-1954) in his God and Personality (Webb 1918), W.G. de Burgh (1866-1942) in From Morality to Religion (De Burgh [1938] 1970), 4 A.E. Taylor (1869-1945) in his The Faith of a Moralist (Taylor 1930), W.R. Matthews (1881-1973) in his God in Christian Thought and Experience (Matthews 1947), A.C. Ewing (1899-1973) in his Values and Reality: The Philosophical Case for Theism (Ewing 1973), C.S. Lewis (1898-1963) in his well-known Mere Christianity and The Abolition of Man (Lewis 1978), 5 and finally H.P. Owen (1926-1996) in The Moral Argument for Christian Theism (Owen 1965) and Basil Mitchell (1917-2011) in Morality, Religious and Secular: The Dilemma of the Traditional Conscience (Mitchell 2000) and Law, Morality, and Religion in a Secular Society (Mitchell 1967). In the period since C.S.
Lewis's writings, there has been a resurgence in Theistic philosophy and ethics, and a resurgence specifically in the moral argument for God's existence. A brief sample of works will illustrate this point. Take, for example, Robert Merrihew Adams, Finite and Infinite Goods: A Framework for Ethics (1999) and "Moral Arguments for Theistic Belief" (1987); John Hare, God's Command (2015) and "Naturalism and Morality" (2002); Dallas Willard, "Naturalism's Incapacity to Capture the Good Will" (Willard 2011) and The Disappearance of Moral Knowledge (Willard 2018); Mark Linville, "The Moral Argument" (Linville 2012); William Lane Craig, Reasonable Faith: Christian Truth and Apologetics (Craig 2008); Angus Ritchie, From Morality to Metaphysics: The Theistic Implications of Our Ethical Commitments (Ritchie 2012); David Baggett and Jerry Walls, Good God: The Theistic Foundations of Morality (Baggett and Walls 2011) as well as their God and Cosmos: Moral Truth and Human Meaning (Baggett and Walls 2016); Paul Copan, "The Moral Argument" (2005) and "Hume and the Moral Argument" (2005); Stephen Parrish, Atheism?: A Critical Analysis (Parrish 2019) and The Nature of Moral Necessity (forthcoming); C. Stephen Evans, Natural Signs and Knowledge of God: A New Look at Theistic Arguments (Evans 2010) and God and Moral Obligation (Evans 2014); see also Baggett (2018) and Jakobsen (2020); "The Moral Argument" (Evans and O'Neill 2021); Matthew Carey Jordan, "Some Metaethical Desiderata and the Conceptual Resources of Theism" (Jordan 2011); J.P. Moreland, The Recalcitrant Imago Dei: Human Persons and the Failure of Naturalism (Moreland 2009); J.P. Moreland and Scott Rae, Body & Soul: Human Nature & the Crisis in Ethics (Moreland and Rae 2000); William Lane Craig et al., A Debate on God and Morality: What Is the Best Account of Objective Moral Values and Duties? (Craig et al. 2020); Divine Love Theory: How the Trinity Is the Source and Foundation of Morality (Adam 2023); and Dale Kratt, "A Theistic Critique of Secular Moral Nonnaturalism" (Kratt 2023). Several notable features stand out as one considers the history of the moral argument. First, it is interesting that the moral argument does not take definite form and shape as a distinct evidential argument for God until the connections from the human moral domain to God become, in some sense, problematic. David Hume (1711-1776) and others, during the period dubbed the Enlightenment, directly challenged prevailing arguments for natural theology, various religious beliefs, and the Theistic basis for morality (see Hume [1739] 1984, [1779] 1998). Hume's work was quite effective at the time, and his efforts have had a continuing and lasting influence, generating strong assessments both for and against (see Sennett and Groothuis 2005). Today, the questions and challenges to the God-and-moral-order connection persist. However, many current secular thinkers refuse to engage with the theistic arguments or even to acknowledge the history of the moral argument that we briefly reviewed above. 6
They remain fully committed to what will be termed here the secular moral project. Nevertheless, the sophistication of contemporary theistic-centered philosophy, theistic metaethics, various developments in natural theology, and the moral argument is notable. Next, from the history of the moral argument, it is also instructive to consider how the differing arguments proceed and which particular facets of the multifaceted phenomena of the moral order each thinker has chosen to focus on. From this, it can be seen that the moral domain, and consequently the moral argument, is a very deep, wide, and rich area that continues to present new opportunities and challenges for Christian thinkers. 7 The moral argument for God's existence can legitimately take many forms that focus both on distinct features of the moral domain and on differing aspects of the 'God side' of the equation. Furthermore, we are incorrigibly moral, since human beings are inescapably immersed in the moral domain. If the God of Theism exists, then the existence of this God is not only relevant to how we understand the normative order of Reality, but an account of this order will most certainly be misunderstood if the Living God is not taken into consideration (MacIntyre 2011). 8 Finally, the moral argument for God's existence is not only in good intellectual company, with a venerable history, but also remains profoundly relevant in today's world, on multiple fronts. To be sure, naturalism continues to be a primary challenge. However, from William Sorley onward (Sorley 1904, 1905, 1918), various Christian thinkers have successfully met the challenges of naturalism and naturalistic ethics. But there is now also the ascending challenge of secular non-naturalist metaethics (moral Platonism) 9 as well as the various and sundry versions of realism, non-realism, constructivism, error theory, and others. It is important to note, however, that there is a long lineage of Christian moral thinkers who have done good work in the past, as well as contemporary Christian thinkers, and so what I am calling the theistic moral project is once again in the ascendancy. But this project does not start from scratch; it just needs to be creatively reworked and expanded to meet the array of contemporary challenges. The Cultural Processes of Secularization The Historical Opening for Secular Ethics In this paper, "secular" simply refers to God-excluding ethical thinking. This need not involve active hostility to Theism, but only that God is not considered in relation to any particular moral project. 10 However, secularism, as part of secularization, involves much more than excluding God. The interest of this section is briefly to understand the broader and more encompassing story of secularization as it unfolds and to situate within it the development of ethical thinking and metaethics in the 21st century. The 19th-century context of British moral philosophy is vital for an understanding of the historical background of the rise of the discipline of metaethics. For example, almost all the thinkers reviewed in our synopsis of the moral argument for God's existence worked in this broader context of British moral philosophy. 11 This broader context is vital for understanding G.E.
Moore and his predecessors. Given his influential work Principia Ethica, Moore is considered a pivotal thinker who bridges late 19th and early 20th century ethical philosophy. If one examines the field of contemporary metaethics, it is evident that most contemporary metaethical thinkers view their work as part of the more comprehensive secular moral project (see Bourget and Chalmers 2014). Of course, the secular need not necessarily exclude God, and it need not entail wholesale atheism. For example, while metaphysical naturalism entails atheism, moral non-naturalism does not. Secular moral naturalism and secular moral non-naturalism disagree on the wider metaphysics of normative Reality. However, they generally agree that there is no God, or that God is of no account in systematically thinking through the moral, the ethical, the normative, 12 the prescriptive, the obligatory (categorical), the aesthetic, or the axiological, and, just as importantly, the scientific. A more careful look at secularism and secularization is then in order. Charles Taylor, in his eminent work A Secular Age (Taylor 2007), begins his wide-ranging study of secularization in Western society with this incisive question: One way to put the question that I want to answer here is this: why was it virtually impossible not to believe in God in, say, 1500 in our Western society, while in 2000 many of us find this not only easy, but even inescapable? 13 The secular moral project we are interested in occurs within this broader opening of secularization in Western society. Few would dispute the claim that today's Western culture is secular in considerable measure and in some general sense. But what precisely is secularization, and how is it to be understood? How did the secularization of the culture occur historically, and what are its implications and impact? A brief examination of these questions is essential to establish a broader context for our understanding of the secular moral project. As is common knowledge, the details are disputed. Overall, secularization is a historically complex, fully multi-dimensional, socio-cultural process that occurs over time and ranges across a given society's macro-level institutions, middle-level organizations, the family and household, and micro-level personal experiences of the lifeworld. The personal lifeworld is a part of this broader process of secularization. The lifeworld involves the whole taken-for-granted practical world of a person's day-to-day life, embedded within a wider umbrella of organizations and institutions. It is the dimension of personal, taken-for-granted beliefs, experiences, sensibilities, and everyday practices. The embedded individual's day-to-day lifeworld and the wider embedding macro context make up the full range of the story of secularization.
A full account of secularization would deal with this full scope. But this scope is obviously too broad and complex to be examined here. Yet awareness of this broader scope helps us point out a few common misconceptions about secularization. Clearly, secularization involves more than a mere change of ideas and beliefs; it is a wholesale change of life practice and worldview. The material conditions of secularization are deep and diffuse. Sometimes, the secularization of society is caricatured as the advancement of reason and science that results in the inevitable decline of irrational belief in God and religion. This sort of activist characterization is much too quick and involves a particular vested spin on how secularization is to be understood. Charles Taylor convincingly argues against the idea that secularization is a one-sided story of the loss of God, the inevitable outcome of modernization, a coming of age that has thrown God off. He calls this view the subtraction thesis. Most importantly, the subtraction thesis cannot explain the persistence of religious belief and practice in the West and outside the Western world. But neither can it readily explain the optimistic side of secularization: a positive humanist belief in and total commitment to unbridled human powers of self-determination, human autonomy, rationality, general human flourishing as an ultimate good, and a fully human-sourced morality (ibid., pp. 253, 572). Clearly, this moral repertoire is more than the mere subtraction of God. A brief survey of some of the broader and deeper dimensional changes is helpful to situate our analysis. Historically, at the macro level and from the top down, secularization involves a complex process of institutional transformation, separation, and differentiation over time. The economic dimension and the rise of capitalism involve new technologies of production, transportation, finance, energy, mechanization, architecture, warfare, and communication. In part, this is the industrial revolution. The economic order becomes rationally objectified and differentiated as a distinct order of production, consumption, commodification, and wealth; this also requires the innovative birth of modern finance. The economic order also shapes, from the bottom up, the content and practice of the personal disciplinary virtue at the micro level that capitalism requires; workers must be disciplined and specialized to be productive and to contribute to the civil and economic order. Next, it is important to consider the unfolding political dimension and the rise of the nation-state, which involves new forms of the political structuring of power, social ordering, and law. Constitutionalism is born, and its notion of political rights comes to the fore. Along with this, political, military, and economic power can be projected across the globe as never before by various competing nation-states; hence the global Western colonialist legacy and the continued inertia regarding globalization. Commodities can be sourced and extracted from across the globe. The late 20th and early 21st centuries see the continued rise of multinationals. Then add the religious dimension, in particular the reformation.
The reformation becomes a constant force for radical religious reform that generates religious institutional differentiation and religious organizational pluralization. The secular order comes to encompass and embed the religious order. Some view this religious and political separation as the heart of the secularization process. The transformation of religious practice and its pluralization also occur from the bottom up, at the level of individual practice within middle-range organizations. The reformation thinkers challenge and attenuate a sacred/secular distinction of practice and vocation. With the ascendance of the physical sciences, new forms of knowledge in the sciences, mathematics, and the arts proliferate and accumulate. These transform our understanding of the physical world, across astronomy, physics, chemistry, biology, medicine, and the arts, all contributing to developing new technologies and new accounts of human nature and the physical world. Evolutionary theory becomes central to the sciences from the middle of the 19th century onward. Additionally, the sciences transform the dimension of education; the Academy shifts from a classical educational format to a more science- and technology-based format. Humanity's central place in the cosmos gives way to a peripheral human place in a more expansive but finite universe. Of course, Taylor understands that the material conditions of modernity are important. But these conditions do not cause secularization, explain secularization, or explain the numerous changes associated with secularism (Taylor 1989, pp. 310-13, 393-418). His analysis of secularization is a wide and detailed interdisciplinary account. It is strongly interpretive. The analysis here will build upon Taylor's analysis to further clarify the subject matter. He identifies several significant transformations in his work that are important to recognize clearly. There is a transition from a fulsome transcendent theism to a much thinner and remote providential deism (Taylor 2007, pp. 221-69). 14 The personal God of theism is no longer seen as an agent that speaks and acts in history (ibid., pp. 274-75). In this shift in belief, the broadest horizon of Reality and humanity's relationship to it is transformed. The relation of God to the created world and the relation of God to the human order of things are reconceived and reconstituted in different ways (ibid., p. 43). In part, this results from what Taylor calls the "great disembedding", in which the social and ritual facets of religious practice and experience are transformed and broken up by decisively shifting towards the individual (ibid., pp. 146-58). The reformation contributes to this shift. In many ways, this is a positive shift. However, with the eclipse of a personal God, the new order in many ways also becomes a complex, impersonal order: a vast sea of governing cosmic natural laws, impersonal causes and mechanisms, formulas and functions, impersonal social and historical laws, impersonal moral ideals, codes, and requirements (ibid., pp. 270-93). However, the idea that the world was made for human beneficence remains central to both theism and deism. In addition, in deism religion becomes narrowed to a diffuse but rather thin moralism (ibid., p. 225).
From the shift to providential deism, only one step away from atheism, there is the related transition to seeing the world in which one lives as disenchanted instead of enchanted. I describe these changes this way. A "disenchanted world" is a world in which a barrier exists between the lifeworld and what is referred to here as World2. By World2 is meant the immaterial world that includes God, who is Spirit, the gods, spirits, angels, demons, invisible powers, and even the dead, including the world of the afterlife (ibid., p. 147). 15 It is obvious that, in different cultures, World2 is conceived in different ways. An "enchanted world" means that whatever powers are taken to occupy World2 can influence World1 (the physical world) and the lifeworld of individuals. As part of this shift to disenchantment, there is a transformation towards seeing the self, the lifeworld, as "buffered" rather than "porous". In describing the lifeworld as "porous", Taylor means that there is an open connection, penetration, and interchange between World2 and the lifeworld (ibid., pp. 35-43). By "buffered", he means that an open porous interchange is closed off (ibid., pp. 135-42). 16 In disenchantment, either the existence of World2 is denied outright, or the lifeworld is isolated and buffered from World2. Secularization also involves a transformation in one's sense of time and history. Without a transcendent God, the broader temporal horizon is still considered linear, but the sense of time becomes flattened, a strictly horizontal flow of time. The lifeworld is situated only within real World1 time (ibid., pp. 54-59). But this horizontal flow of time is still defined by a linear notion of historical progress on all fronts. These combined changes contribute to what Taylor further describes as a developing crystallization of an "immanent frame" of experience and thinking (ibid., pp. 542-57). By this, he means that the totality of human life and thought becomes enframed within this-worldly immanence instead of other-worldly transcendence. Central to the immanent frame is "exclusive humanism" (ibid., pp. 242-69). 17 Humanism of this sort is a radical shift, an intra-human "inward turn in the form of disengaged reason" (ibid., p. 257). 18 Exclusive humanism becomes a fully rational and moral vision in which human nature is valorized (ibid., p. 256). 19 It is thought possible to utilize exclusively humanly sourced powers of reason, morality, values, and the sciences to achieve exclusively human ends of progress and human flourishing. Associated with this is a transition from a universal ethic grounded in Christian agape to a universal and idealized commitment focused exclusively on human beneficence in this world. The key is that all of this is God-excluding, either actively or passively. It is a distinctively anthropocentric moral ideal and commitment (ibid., p. 247). Humanity alone becomes the locus of a positive and exclusive humanist belief in, and total commitment to, unconstrained human powers of self-determination, human autonomy, rationality, political freedom, universal justice, generalized human flourishing as an ultimate good, and an exclusively human-sourced morality and scheme of values. Taylor convincingly argues that none of this would have been possible without the prior groundwork laid by Christian theism. He states:
. . . all present issues around secularism and belief are affected by a double historicity, a two-tiered perfect-tensedness. On one hand, unbelief and exclusive humanism defined itself in relation to earlier modes of belief, both orthodox theism and enchanted understandings of the world; and this definition remains inseparable from unbelief today. On the other hand, later-arising forms of unbelief, as well as all attempts to redefine and recover belief, define themselves in relation to this first path-breaking humanism of freedom, discipline, and order. (ibid., p. 269) As regards religion, after the reformation an unending and continuous pluralism of both belief and unbelief unfolds (ibid., p. 437; see also MacIntyre 1998). 20 In many respects, this development is positive. Over time, both belief and unbelief are subjected to tremendous cross-pressures and to what Taylor calls fragilization (ibid., pp. 303-4). He has dubbed this contentious explosion and proliferation of religious and spiritual options beyond orthodoxy a "nova effect". The pluralized world of today lives in the aftermath of this nova effect. Mill, Sidgwick and Moore This section will explore more specific historical developments while taking the preceding as a general context for understanding the secular moral project. By the middle of the 19th century, where our analysis now picks up, the full opening of secularization is firmly in play and continuing to unfold. Into this broader opening, the secular moral project develops. Three specific thinkers, John Stuart Mill, Henry Sidgwick, and G.E. Moore, are relevant to our analysis. Here is the logic behind selecting these three successive thinkers as decisive for the development of the secular moral project. Moore is the transitional thinker leading into the modern period who set the debates for early 20th century metaethics. These debates continue into the present. But, prior to Moore, Sidgwick is the critical thinker who sets the table for Moore by 'doing ethics' differently, by doing metaethics. Sidgwick, along with Mill, laid the groundwork for the modern period by laboring to work out the basis and details of a new and fully adequate secular ethics. Hence, these three thinkers are decisive for understanding the development of the secular moral project. Theism has always provided a natural and unproblematic placement within which the moral order of things is fittingly nested (Parfit 1987, pp. 452-54).
21 If God exists, then the moral order is grounded. To be sure, the details of this are worked out in different ways in Christian, Jewish, Islamic, and other versions of Theism. But all Theists agree that God, who is personal, and who is fundamentally a moral, spiritual being, must, by virtue of being God, somehow be the ultimate source of the normative order. In a Theistic world, all credible ethical sources involve God and are intimately linked to God. They must depend upon God in some fundamental way. This dependence on God also has profound practical implications. In this sense, historically, a God-given moral order not only structured and guided the whole of life in thinking and practice but also indirectly showed God's undeniability (Taylor 1989, pp. 303-4). This order of human life needed God. However, once the deniability of God becomes broadly plausible, the very foundations of the moral order are also questioned. Secularization then forces a rethinking of the moral order down to the foundations. Once there is a total commitment to an exclusive humanism that is optimistic about rationally elaborating a fully humanly sourced moral vision, the gauntlet is laid down for fully engaging and developing the secular moral project. This need must be filled and hammered out by serious secular moral thinkers. Otherwise, the secular project will morally flounder. Secular thinkers are forced to face squarely a whole host of thorny questions and problems concerning the moral order, given the premise and conditions of secularization. Mere reactionary critiques of Theism will no longer suffice in this regard. Secular worldview logic has a straightforward premise: since God does not exist, this fact must be squarely faced on all fronts. The big questions still loom very large indeed. For example, how should we think and live? Why should we believe and live this way or that way, and how can this thinking and living be systematically formulated in a strictly secular view of Reality? The secular moral project becomes central to this broader set of pressing questions and concerns. John Stuart Mill John Stuart Mill (1806-1873) not only feels the need for such an account but also takes up the challenge of trying to develop one adequately. Mill's writings are prolific, and his impact was significant. 22 Several aspects of his thinking will be briefly discussed before moving on to the work of Sidgwick and Moore. 23 Mill was raised by his father in the tradition of philosophical radicalism to become the ultimate Victorian intellectual and utilitarian reformer (Brink 2018, p. 3). 24 It is significant that as a young man, between the years 1826-1830, Mill suffered from a severe period of depression. He experienced a deep intellectual and emotional crisis (ibid., p. 4). In the period in which Mill writes, he pens not only his classic work on the ethics of utilitarianism (1861) 25 but also philosophical works arguing against various elements of Theism and natural theology by critiquing the standard pieces of evidence put forward in favor of Theism (Mill [1874] 1998). 26 Mill was an exclusive humanist who advocated what Auguste Comte called the religion of humanity (Raeder 2001, 2002).
27 In this religion surrogate, humanity becomes a kind of object of devotion, as both the source and the object of moral good and endeavor. Two additional things should be noted about the context in which Mill writes. First, Paley's work on natural theology (Paley [1802] 2017) 28 was still highly influential at the time, so much so that Mill felt compelled to respond to the prevailing arguments of Paley. 29 Frederick Rosen takes Paley's natural theology as the spiritual core of the metaphysics of the British Enlightenment (Rosen 2005, p. 113). But Paley's work in moral and political philosophy (1785) was also highly influential (Schneewind 1977, p. 177). 30 Paley was a proponent of a version of Theistic utilitarianism (Paley [1785] 2017). 31 Both of Paley's works were commonly used as textbooks for years in the first half of the 19th century (Fyfe 1997). 32 Second, Mill fully recognized that an adequate and complete secular ethics had yet to be worked out. In 1847, Mill urged John Austin to write a systematic treatise on morals, without which the kind of moral reform Mill, Austin, and others were hoping for could not be achieved (Schneewind 1977, p. 178). 33 Mill also shared his view in 1854 that "ethics as a branch of philosophy is still to be created" (ibid.). 34 This year, 1854, was the same year that Utilitarianism was drafted; after some 30 years of thought, his final revisions came in 1859, and it was finally published in 1861 (Irwin 2011, p. 364). Initially, it was only marginally impactful; only gradually was it noticed and given critical attention (Schneewind 1977, pp. 178-88). It is now the best-known account of classical utilitarianism to date. But Mill was no staunch atheist. He was a Theist of sorts, a believer in a finite Theistic God, what some have referred to as a "probable Theist". This can be seen both from the practical side of his life and from his posthumously published essay entitled "Theism" (Settanni 1991; Devigne 2006; Carr 1962). 35 Although Mill worked to contribute to the secular moral project, he also recognized that it was far from complete. He saw it as just beginning. But clearly Mill believed that there was a comprehensive moral answer, though he could not provide it fully. This point is significant. Mill is not committed to anything like moral skepticism or moral nihilism. Moreover, he developed his utilitarian account after a long line of previous thinkers, both secular and religious, had espoused some form of utilitarianism. 36 He attempted to remedy previous problems and misconceptions throughout his argument, which sought to develop a convincing account of utilitarianism and to provide a kind of "proof" of utilitarianism. 37 Most agree that his proof is less than successful. Nevertheless, Mill was a highly influential political and moral reformer, philosopher, and statesman; his moral philosophy was worked out toward these larger ends. He believed philosophy could change how people thought and lived regarding moral good and that this could have a positive social, political, and economic impact. This project is in complete agreement with Mill in this regard. How important, then, is Mill? Given his work, David O. Brink takes Mill as the most influential philosopher of the 19th century in British moral philosophy. 38 Henry Sidgwick While Mill's work leaves the secular moral project unfinished, still to be created, it also overlaps and leads into the work of Henry Sidgwick (1838-1900), author of The Methods of Ethics (1907, 7th edition). 39 Schneewind comments on this monumental work of Sidgwick.
It was not until Sidgwick's Methods, which tried to reconcile these two schools (intuitionism and utilitarianism), that all the characteristics of a modern treatment of ethics were fully and deliberately brought together in a single work. Sidgwick is often described as the last of the classical utilitarians. He may with as much accuracy be viewed as the first of the modern moralists. (Schneewind 1977, p. 122) In what ways might Sidgwick be considered the first of the modern moralists? It has mostly to do with the way Sidgwick went about the task of ethical philosophy, and with the reasons why he did it. 40 A lot of this can be gleaned from his introduction to the Methods. Sidgwick completed the first edition of the Methods in 1874, when he was 36 years old. The final, seventh edition was completed and published after his death, in 1907. He spent his entire academic life revising the Methods. His influence is clearly seen in that the dominant forms of the problems of later British and American moral philosophy were, in many important ways, shaped by his work 41 (Harrison 1996). In the very first sentence of the Methods, Sidgwick points out that the boundaries of ethics have been variously and vaguely conceived. Deliberately and clearly establishing the boundaries of ethics was thus a major part of what Sidgwick set out to do in the Methods (1907, pp. 11-12). 42 Throughout the Methods, he works to differentiate ethics clearly from other disciplines, such as politics, economics, philosophical metaphysics, or theology (ibid., pp. 78-80). Sidgwick also shows how ethics must be distinct from psychology and sociology (1907, p. 2). 43 At the time Sidgwick writes, moral philosophy includes these various disciplines within its scope. According to Sidgwick, ethics is an autonomous discipline standing on its own (ibid., p. 507). Its aims, sources, and boundaries should have clear limits, while not borrowing fundamental premises from other sources (ibid.). Sidgwick thus establishes the autonomy of ethics, a significant achievement in the secular moral project. Establishing the autonomy of ethics also helps to further distinguish between first-order ethics and second-order metaethics. This distinction is central to 20th and 21st-century ethical theory and owed much to Sidgwick's work. 44 For example, first-order ethics might discuss what our various duties are; second-order metaethics seeks to understand the nature of duty itself, what duty fundamentally consists in. Much of Sidgwick's discussion in the Methods is worked out at the level of the metaethical, as one can see in his analysis of what is "good", what is "right", and the notions of "ought", "virtue", "duty", and so on. In the wake of the Methods, ethical analysis at the abstract level of metaethics has become commonplace, an independent specialty in ethics. According to Sidgwick, a related claim follows from his analysis: that there is a fundamental distinction between "is" and "ought". This means that a truly categorical "ought" cannot be derived from an existing particular thing or from an infinite collection of particular things (1907, pp.
25, 396; see also Phillips 2011, pp. 55-57). Next, Sidgwick fully recognizes that the situation within which the ethical theorist works is pluralistic. Sidgwick seeks to understand and explain why and how this is so. The major ethical viewpoints in British moral philosophy at the time Sidgwick wrote were egoism, intuitionism, and utilitarianism. This pluralism is the starting point for the Methods, which analyzes its character and works out ethical theory so as to cut through the various confusions induced by conflicting viewpoints. 45 The Methods seeks to work out a unique synthesis in this regard. By and large, Sidgwick accomplishes this. He ends up synthesizing an intuitionally grounded utilitarianism. 46 Notwithstanding Sidgwick's efforts, however, ethical pluralism since the Methods has only increased. Next, theoretical ethics for Sidgwick is a fully human undertaking. This is key. Fundamentally, ethics is a task undertaken by human beings for human beings, and it is basically about human beings. The task of ethical thinking excludes anything above and beyond the human, even if such might exist (1907, pp. 114-15). Sidgwick's exclusive humanism is evident here. Ethics is a fallible human project and mostly a secular moral project. But this does not mean, as will be further seen, that Sidgwick subscribes to atheism. He does not. Nevertheless, after Sidgwick, the secular moral project is in full swing. So then, for Sidgwick, the project of ethics is progressive, given that first-order ethical views will change over time. The ethical views of the future will probably differ from those of the present in the same way that the views of the ancients differ from those of the moderns. What "ought" consists in will not change (the metaethical), whereas what we take to be our specific "oughts" may very well change over time (Sidgwick and Sidgwick 1906, pp. 607-8). By a method of ethics, Sidgwick means "any rational procedure by which we determine what individual human beings 'ought'-or what is 'right' for them-to do, or to seek to realize by voluntary action" (Sidgwick 1962, p. 1). He recognizes a diversity of methods in ordinary practical ethical thinking (ibid., p. 6; see also Brink 1994, pp. 179-201). In this, Sidgwick identifies three primary methods: egoism, utilitarianism, and intuitionism. According to Sidgwick, the study of the methods of ethics should involve "systematic and precise general knowledge of what ought to be" (ibid., p. 1). Ethics is thus clearly focused on the categorical, on oughtness. Sidgwick is an "all-purpose" rationalist in that ethics must be worked out and made precise through human reason. He is not an extreme rationalist who believes that reason is all there is. Sidgwick believes that this kind of rational study of ethics can be carried out in a somewhat "neutral" fashion, in the sense that one need not be rationally pre-committed to a particular outcome in the analysis. But there is a conflict here. Any supposed neutrality can never be complete, because it will conflict with the practical requirement that compels us to ethical thinking and action (ibid., p.
14). After all, a method, according to Sidgwick, is a way of thinking about what is right (and wrong) to do. While Sidgwick believes that common-sense morality has practical value and provides a bedrock for moral truth and practice, it is nevertheless imprecise and unclear in many respects. Rational analysis of ethics, therefore, must give precision and clarity to common-sense morality so that ethics attains the position of a rational science. It must transcend common-sense moral thinking. 47 Here, the notion of science, as Sidgwick uses the term, is the looser 19th-century sense that was common at the time. But he did see the natural sciences as a paradigm case of how progress is achieved. Since Sidgwick works to delineate "fundamental principles" of ethics along intuitionist and utilitarian lines, and rejects both logical and systemic contradictions as negative tests for truth, his epistemology is appropriately classed as moderately foundationalist and coherentist (1907, p. 509). Sidgwick sometimes compares ethics to the way geometry is worked out with axioms and derivations (ibid.). Sidgwick is moderate, given that the Methods focuses on practical reason, on what one "ought" to do, and on how to determine right conduct. Finally, Sidgwick aims overall toward a "harmonious system" in his exposition of the methods of ethics, but he explicitly warns that he is not striving to forge a single, unified, harmonious systematic method (ibid., pp. 13-14, 496). It is generally agreed that Sidgwick is accurately described as an ethical non-naturalist (Crisp 2015). But Sidgwick is no moral Platonist. He does not use the language of moral properties or ontology and does not refer to any Third Realm or the like to elaborate his version of ethics. He is what today would be termed a moral realist of the cognitivist sort (Sayre-McCord 1988b). 48 He rejects the notion that the "natural" can furnish an ethical first principle from which to work out a consistent metaethical system (Sidgwick 1962, p. 83). 49 He also rejects the notion that the ideal of Ultimate Good or Universal Happiness can be established naturalistically (1907, p. 396). Sidgwick takes naturalistic ethics to be inadequate in at least two respects. First, all versions run afoul of what he takes to be the fundamental is/ought distinction: the categorical "ought" cannot be derived from any collection of natural particulars, nor can ethical ideals be similarly established. Second, the various naturalistic proposals each have their particular problems, which lead Sidgwick to reject them (Phillips 2011, pp. 14-15; Crisp 2015, p. 11, n. 18). 50 However, Sidgwick also rejects Theistically grounded ethics, though for different reasons (1907, pp. 504-7). Sidgwick's relation to Theism is intriguing and ambivalent and merits a closer look. As is well known, Sidgwick resigned his fellowship at Cambridge in 1869 because of reservations concerning the requirement to assent to the 39 Articles of the Anglican Church in order to teach (Tribe 2017; Medema 2008). 51 The 39 Articles were expressly orthodox in content and practice. Sidgwick's resignation is often referred to as his turbulent "crisis of faith". 52 But Sidgwick does not become an atheist, although he fits the profile of a secularist rather well. 53 He might best be described as an agnostic with leanings toward Theism, or a weak Theist with agnostic leanings.
On the one hand, Sidgwick concludes in the Methods that Theism cannot be established "on ethical grounds alone" (1907, pp. 506-7). Most theists would agree. On the other hand, Sidgwick writes in personal correspondence in 1898 that "the need of Theism-or at least some doctrine establishing the moral order of the world-seems clear to me". 54 Again, most Theists would agree. Sidgwick seems to be gesturing toward a version of Providential Theism. Along with rejecting orthodoxy, he also saw that Paley's natural theology and moral philosophy had pretty much exhausted themselves by the mid-to-late 19th century. They were no longer interesting and compelling for many thinkers. Again, most theists would agree. Still, Sidgwick is fully committed to and engaged in the secular moral project. We previously noted that Sidgwick sought to establish ethics as an autonomous discipline with distinctive, rationally derived, non-theological first principles. This goal partly forms the basis for his acceptance of intuitionism (Skelton 2010). 55 Sidgwick concluded that intuitionism and utilitarianism, thought by most to be in conflict, could be reconciled. But he also sought to reconcile individual personal happiness (egoistic hedonism/self-interest) with ultimate collective happiness (utilitarianism/duty to others) as an ideal of ethics. 56 However, he finally concluded that these two methods of ethics could not be rationally reconciled. If a person acts in self-interest, this might be rational. If a person acts for the greater happiness of others, this, too, might be rational. Sidgwick concluded that no unified, universal, categorical "ought" could be synthesized between these two principles. Sometimes, though not always, these two methods will necessarily conflict. For Sidgwick, this is more than a moral conflict, an intellectual tension, a moral difficulty, or a philosophical paradox. He describes it as an "ultimate and fundamental contradiction" of the intuition and judgment that inform practical reason and, along with such a contradiction, the attendant failure of a non-contradictory, rational ethical theory. 57 This was a final and severe blow to Sidgwick's systematic aspirations. Sidgwick's conception of practical rationality is that it provides complete and conflict-free guidance (Holley 2002). 58 Hence Sidgwick's notorious, contradictory and intractable "dualism of practical reason". This dualism he takes to be a rational contradiction at the heart of his ethical system that he cannot resolve within his exclusive humanist and rationalist commitments. Sidgwick judges the implications of this to be severe. He even admits that this contradiction threatens to open "the door to universal skepticism" (1907, p. 509). He never gave in to such skepticism. He concludes the final edition of the Methods this way: I do not mean that if we gave up the hope of attaining a practical solution of this fundamental contradiction, through any legitimately obtained conclusion or postulate as to the moral order of the world, it would become reasonable for us to abandon morality altogether: but it would seem necessary to abandon the idea of rationalizing it completely. ...
If then the reconciliation of duty and self-interest is to be regarded as a hypothesis logically necessary to avoid a fundamental contradiction in one chief department of our thought, it remains to ask how far this necessity constitutes a sufficient reason for accepting this hypothesis. This, however, is a profoundly difficult and controverted question, the discussion of which belongs rather to a treatise on General Philosophy than to a work on the Methods of Ethics: as it could not be satisfactorily answered, without a general examination of the criteria of true and false beliefs.59

We must bear in mind that this is the mature Sidgwick writing here, and not the Sidgwick of the oft-quoted concluding passage of the first edition of the Methods of 1874, which was effectively revised out of subsequent editions, never to reappear.60 We can see in these final words that all of the things that had done good work for Sidgwick throughout the Methods now seem to work against him: his exclusive humanism, his rationalism, his utilitarianism, the autonomy of ethics, his quest for a unified and perfect ethical ideal, his inveterate precisionism, his thin providential Theism, and finally his sidelining of full-orbed Theism as integral to a completed metaethics. But in these final thoughts, he clearly states that, for ethics to be rational, there must be a reconciliation of the "fundamental contradiction" as a "logically necessary" hypothesis. In other words, reconciliation is achievable, but he does not know how. His gesture toward a solution from "General Philosophy" is hardly optimistic. The language of logical necessity here is strong indeed. Most contemporary Sidgwick interpreters think it is too strong61 and demur at Sidgwick's precisionist and perfectionist tendencies and at how he frames the problem (Crisp 2015; Parfit 2011, vol. 1).62 But there is another way to see things.

Ironically, what Sidgwick actually discovered in his trek through the moral trees, as he exited the moral forest, was a version of the moral argument for the existence of God. So argue Baggett and Walls.63 Notice how Sidgwick looks to the world's moral order for a possible resolution. Theism could provide the basis for this order. Sidgwick saw this, as he stated in personal correspondence. Not, of course, the thin and exhausted Theism of Paley's natural theology or the Victorian moralism of the day. "Full moral rationality requires an ontological ground of morality that, among other things, 'guarantees' an unbreakable connection between morality and the ultimate self-interest of all rational beings".64 This rationality must involve both God and a reconciliation of the moral order in life after death, that is, in a world to come.65 Can a full account of Theistic metaethics provide for such rationality? Ironically, Sidgwick's Methods create an opening for just such a moral argument for God, but Sidgwick himself did not see a way to solidify the connections and ideas. While a Theistic relation to the moral order seemed intuitively evident to him, he could never work out a rationally clear account of the nature of that order within either a dogmatic Anglican orthodoxy or an exhausted Paleyan natural theology, both of which he rejected. But he also could not work out a final reconciliation of the dualism of practical reason within an entirely secular moral logic. For if such a logic failed of logical necessity, it thereby failed of moral necessity.
For Sidgwick, there was nowhere else to go. He had come to the end of his resources. But clearly, Sidgwick still believed there was an objectively right and true answer to his quest. Yes, Sidgwick, whom some consider the most significant moral philosopher of the 19th century, was fully committed to the secular moral project (Broad 1930).66 But the methods of ethics could not be fully rationalized as Sidgwick had hoped. We can see, then, that this left his task unfinished and unfinishable, given his array of secular commitments and his particular formulation of the methods of ethics. As the generations shifted toward the young and optimistic thinkers of the early 20th century, "old Sidg", as Bertrand Russell and others of his young students called him, died in 1900.67 Much of his labor faded into obscurity.68

G. E. Moore

One of these students was the young G. E. Moore (1873-1958). Moore was a student of Sidgwick's, and Moore's impact on 20th-century ethical thought, beyond Sidgwick's, is indisputable. Contrary to popular belief, the publication of Moore's most well-known work, the Principia Ethica, did not rock the world of ethical philosophy in 1903 when it was first released (Moore [1903] 1993).69 It was not until the 1930s that the influence and importance of Moore's primary work were widely recognized.70 It was one of his earliest articles that gave him early fame: "The Refutation of Idealism", also published in 1903.71 Sidgwick's influence on Moore is evident throughout Moore's work.72 In the Principia, Moore trod well-worn paths, and many of his ideas were shared by his contemporaries.73 However, this diminishes neither Moore's originality nor his impact. But it is important to put that impact in proper context regarding the history and thought that concern us (Hurka 2003).74

Moore's work was highly impactful for several reasons. The first is Moore's rhetorical style. The Principia strikes one at first as crisp, succinct, to the point, and laser-like, and it exudes rhetorical confidence. It is laid out in what appears to be a powerfully logical format, and Moore looks to be proceeding succinctly and rigorously. This style was very different from other philosophical writing at the time. For example, it contrasts sharply with Sidgwick's expository, wandering, wordy, heavy, and unconcise style (Eddy 2004; MacIntyre 1998).75 In the preface to the first 1903 edition of the Principia, Moore asserts that the problem with virtually all past philosophy, and ethics in particular, is its lack of clarity in questions, answers, and analysis. Moore set out to rectify all these confusions of the past in the Principia (Moore [1903] 1993, pp. 33-37). Who would not be interested in a serious philosophical work that genuinely set all previous philosophers straight? The turn of the 20th century was rife with this kind of visionary optimism.

Secondly, like Sidgwick, Moore claims to be developing a "scientific ethics" in the sense of science common in the late 19th century (Moore [1903] 1993, p. 55). According to Moore, all previous ethical systems of thought had failed to achieve the status of a rigorous science of ethics (Willard 2018, p. 113). Moore spent much effort detecting errors and fallacies, defining terms, analyzing the language of ethics, and parsing the words being used, as well as the sentences, concepts, and ideas. This way of doing philosophy was part of the beginnings of the analytic tradition, with its linguistic turn, that still pervades much of technical philosophy today.76
One can agree with Moore that the muddled use of language leads to muddled philosophy. But the analysis of language itself cannot yield a complete understanding of the moral domain, in whatever way this domain is conceived. Ethics and values are more than language use. The central strategy was to get at the meaning of the ethical by analyzing the language of the ethical, which, it was hoped, would enable one to clarify the concepts and content of ethics and thereby forge a science of ethics. For Moore, the central factor around which all ethical thinking revolved was that of intrinsic good (Moore [1903] 1993, p. 55). His central question was, "what is good?" Moore is not asking the question, "what is the good?", that is, the highest good in Plato's sense of the summum bonum, but rather, what is the nature of good itself as we use the term in our everyday moral language?77 Put more precisely, how is good to be defined? And then how is this definition to be applied to the things we refer to as good and understand to be good? (Moore [1903] 1993, p. 57)78

Moore believed that a science of ethics would be based on a precise and accurate conception of intrinsic good. He also carried forward the commitments of British utilitarianism as well as intuitionism, but he argued that good is the fundamental principle of ethics and that the definition of good is the central question of ethics. So then, according to Moore, the notion of right is derivative from that of good. Good makes an action right, and not the reverse. In Moore's day, the analysis of properties had not been developed thoroughly in philosophy, so Moore's analysis of moral properties and ontology is very limited in scope. He also never technically deploys the notion of supervenience, a development that later ethical thinkers would find almost indispensable for conceptualizing the metaphysics of the moral domain.79 Nevertheless, he argues that good is not a natural property, nor is it a supernatural property. He thus rejects both ethical naturalism and ethical Theism. He claims instead that good is an indefinable, irreducible, simple, intrinsic, and nonnatural property (Moore [1903] 1993, pp. 60-61; Moore 1962, pp. 89-100). This notion of a nonnatural property was both interesting and intriguing. Strangely, it looked like Platonism but was curiously different from classical Platonism (Moore [1903] 1993, pp. 227-31). Yet precisely how this notion was to be taken became a thorny issue that carried over into subsequent debates and remains disputed in current debates.

Thirdly, Moore utilized two argumentative strategies in particular to make his point that good is a nonnatural property. He dubbed the two centerpiece arguments of the Principia the "open question argument" and the "naturalistic fallacy" (Willard 2018, pp. 116-17).80 These two arguments, and the question of non-naturalism, were particularly disputed. Clarifying these matters absorbed much of the effort of the first half of 20th-century secular moral philosophy (see Prichard 1912; Frankena 1950; Broad 1930; Ross 1930, 1939; Geach 1956).81 It is generally agreed that the naturalistic fallacy is no formal fallacy (Sinclair 2019),82 that the open question argument is formally invalid but interesting and sometimes useful,83 and that Moore's way of conceptualizing a nonnatural property contributed to many unfruitful controversies that plagued 20th-century secular moral philosophy (Baldwin 2003; Darwall et al. 1992; Soames 2005; Miller 2013; Warnock 2007).84
The issues are still discussed today, and the notion of a secular non-naturalist metaethics has recently been revived with full force (see Enoch 2013; Shafer-Landau 2005; Wielenberg 2014; Huemer 2008; Kulp 2017, 2019; and, critical of this trend, Baggett and Walls 2016; Kratt 2023; Parrish forthcoming).85

Fourthly, at the time that Moore wrote, it was believed that Moore had achieved a knockout argument against ethical naturalism, that he had actually refuted it. It indeed appeared so. Moore states his rejection of ethical naturalism in no uncertain terms throughout the Principia (Moore [1903] 1993, pp. 70-71). And if Moore had actually achieved a knockout argument against ethical naturalism, then that would have stood as a significant philosophical achievement (Sturgeon 2003).87 But if the naturalistic fallacy and the open question argument fail to hold, and nonnatural moral properties remain mysterious, Moore's case against naturalism is greatly diminished.88

Finally, what of the legacy of Moore's work (Horgan and Timmons 2006)?89 Mary Warnock argues convincingly that the Principia dealt the final death blow to grand metaphysical theories of ethics, particularly those of Idealism. Moore's rhetorical style also had a significant effect.90 But he had many second thoughts about the ideas in the Principia, as the preface to the second edition shows (1922; see also Baldwin 2010).91 Moore reflectively described the Principia as "full of mistakes and confusions".92 But he still held that intrinsic good was not identical to any natural or supernatural property. Nevertheless, in his well-known "A Reply to My Critics", he acknowledges that his characterization of naturalism seemed to him now (1942) "silly and preposterous". He also admits, "I agree, then, that in Principia I did not give any tenable explanation of what I meant by saying that 'good' was not a natural property".93 He also acknowledged that his notion of an intrinsic property was vague and unclear.94 So then, if the three central theses of the Principia do not stand, and Moore's characterization of naturalism, against which he is predominantly arguing, is admittedly fuzzy, there is little of Moore's ethical philosophy that remains standing.95

But there is another big worry, which Dallas Willard points out in Moore's Principia, that is typically ignored by friend and foe alike.96 Relating to right conduct, after providing a long list of impossible consequentialist qualifications for evaluating right conduct, Moore concludes that "(w)e never have any reason to suppose that an action is our duty".97 Willard rightly takes this to be an eye-popping, concussive conclusion. He further points out that Moore never retracts this view; instead, he reinforces it in his summary and conclusion on right conduct that follows.
So then, Moore's vaunted boldness in 1903, in aiming to correct all the philosophical errors of the past, is now laid bare in his honest and unpretentious admissions of philosophical incoherence and confusion on key details. If the past of ethical philosophy was fuzzy in 1903, it is even less clear or certain after Moore. In Moore's defense, one must acknowledge that the issues he works through are quite difficult. Nevertheless, after the Principia, the secular moral project is reeling, trying to find its footing and sense of direction. The impact of the Principia propels the secular moral project in several different directions. God is nowhere an option for Moore or any of the other secularists. Theistic thinkers did not significantly interact with Moore's work. In 1907, Hastings Rashdall, a lucid Theistic ethical thinker of the early 20th century, remarks in the preface to The Theory of Good and Evil: A Treatise on Moral Philosophy that the work of Moore (1903) came too late for him to incorporate it into his newly published volumes (1907).98 William Sorley does mention Moore in a couple of places in his work, but with no significant interaction, since Moore's work completely ignores the question of God in relation to good, right, and the ethical. In the early 1940s and 50s, a little-known thinker, C. S. Lewis, no technical philosopher, gave his radio lectures in Britain, later published, on the moral argument for the existence of God. Moore is never acknowledged. To this day, Lewis's works remain readable and compelling classics of moral argument and analysis.99 As a former atheist turned Theist, Lewis clearly perceived his day's intellectual and ethical vacuum and responded to it accordingly.

In the broader academic context of the period, the two primary heirs of Moore's ethical thinking are the emotivists and the intuitionists of the 1930s and beyond.100 Intuitionism breaks some new positive ground, while emotivism devolves into the position that ethical (and theological) propositions contain nothing truly factual but only reflect a person's feelings of approval or disapproval toward ethical matters.101 By mid-century, Stuart Hampshire laments the fact that moral philosophy has lost its way (Hampshire 1949). Roderick Firth complains that just about every form of ethical analysis has been tried with no agreement, only more detail and fragmentation (Firth 1952), and Elizabeth Anscombe in 1958 demands a halt to all moral theorizing until further developments in the human sciences can accommodate a theoretical moral consensus (Anscombe 1958).102
At present, the secularism of the moral project is the dominant view of the Academy in metaethics. Secularism has institutional clout. However, it is distributed throughout a dizzying cacophony of differing metaethical theories and proposals. The common denominator is the rejection of explicit Theism. Naturalistic ethics continues to have a powerful influence on secular ethics, but new-wave secular moral non-naturalism is certainly in an ascendant position. Theistic metaethics is also a reemerging position, as we have briefly shown, and the moral argument for the existence of God is charting new territory, with or without recognition by secularists. However, the secular moral project has not abated since Mill, Sidgwick, Moore, and beyond, and 21st-century secular metaethics shows no signs of diminishing. Engaging with this wide and diverse panoply of secular thinkers and the secular moral project is one of the most urgent tasks of the resurgent Theistic moral project. Theistic moral philosophers are clearly required to undertake their work given the conditions and challenges of secularization. Going forward, however, both secularist and Theistic thinkers in metaethics will have to develop a more comprehensive metaphysics of Reality that enables them to fill in and adequately support their metaethical thinking. In this author's opinion, for both the secular and Theistic moral projects, this is where the most pressing philosophical and theological challenges lie for metaethics.

Notes

6. Breitenbach is correct in judging the impact of Kant's moral argument. He observes that "Kant's argument made an impact on the landscape of moral philosophy by forcing those who came after him to consider what implications atheism would have for the rationality of following the moral law". Zachary Breitenbach (2021), "Evaluating the Theistic Implications of the Kantian Moral Argument that Postulating God Is Essential to Moral Rationality", Studies in Christian Ethics 34, no. 2: 149.
7. See Dougherty et al. 2018, p. 447. In this interview, Alvin Plantinga states that he thinks the moral argument for God's existence to be "the most compelling".
8. As Alasdair MacIntyre puts the matter, "To be a theist is to understand every particular as, by reason of its finitude and its contingency, pointing towards God. ... It is to believe that, if we try to understand finite particulars independently of their relationship to God, we are bound to misunderstand them" (MacIntyre 2011, p. 23). For that matter, it could also be polytheistic, or pantheistic, to point out some other options.
9. This author has recently completed a PhD dissertation (2023), entitled A Theistic Critique of Secular Moral Nonnaturalism, that critiques the secular moral Platonism of David Enoch, Russ Shafer-Landau, Erik Wielenberg, Michael Huemer, and Christopher Kulp. The dissertation also develops a distinctive version of the moral argument for the existence of God.
10. As Dallas Willard rightly points out, "nonnaturalism has been the rule and not the exception in ethical theory" (Willard 2018, p. 114).
11. Many of the books of these thinkers grew out of presentations of the Gifford Lectures. The Gifford Lectures were established in 1887 to focus on issues related to natural theology. God and the moral order has been a central theme in natural theology.
12. The first-order moral, the ethical, and the normative are taken to be roughly equivalent throughout this article.
13. (Taylor 2007, p. 25). Taylor takes almost 900 pages to work out this question. Taylor also notes the move to atheism by the intermediary stage of deism; ibid., p. 293.
15. I am using this as a socio-cultural concept and not a metaphysical possible-worlds concept.
16. As Taylor describes this, "(the) buffered self is the agent that no longer fears demons, spirits, magic forces". Ibid., p. 135. See also his discussion on pp. 300-1.
17. This "frame" is part of what he terms "secularity 3", which is "not usually, or even mainly a set of beliefs which we entertain about our predicament" but instead "the sensed context in which we develop our beliefs". Ibid., p. 549 (emphasis original).
85. There are few defenders of the naturalistic fallacy or the classical open question argument as Moore formulated them, but there has been a revival of secular moral non-naturalism (Regan 2003; Tucker 2018).
86. Lewis has been the most widely read and influential writer on the moral argument in the 20th and 21st centuries. Mere Christianity continues to gain in popularity; it has sold over 3.5 million copies since the early 2000s.
Wiman and Arima theorems for quasiregular mappings

Generalizations of the theorems of Wiman and of Arima on entire functions are proved for spatial quasiregular mappings.

Main Results

It follows from the Ahlfors theorem that an entire holomorphic function $f$ of order $\rho$ has no more than $\lfloor 2\rho \rfloor$ distinct asymptotic curves, where $\lfloor r \rfloor$ stands for the largest integer $\le r$. This theorem gives no information if $\rho < 1/2$. This case is covered by two theorems: if an entire holomorphic function $f$ has order $\rho < 1/2$, then
$$\limsup_{r \to \infty} \min_{|z| = r} |f(z)| = \infty$$
(Wiman [1]); and if $f$ is an entire holomorphic function of order $\rho > 0$ and $l$ is a number satisfying the conditions $0 < l \le 2\pi$, $l < \pi/\rho$, then there exists a sequence of circular arcs
$$\{\, |z| = r_k,\ \theta_k \le \arg z \le \theta_k + l \,\}, \qquad r_k \to \infty,\ 0 \le \theta_k < 2\pi,$$
along which $|f(z)|$ tends to $\infty$ uniformly with respect to $\arg z$ (Arima [2]).

Below we prove generalizations of these theorems for quasiregular mappings for $n \ge 2$. The next two theorems are generalizations of the theorems of Wiman and of Arima for quasiregular mappings on manifolds. The proofs of these results are based upon the Phragmén-Lindelöf and Ahlfors theorems for differential forms of WT-classes obtained in [3]. For $n$-harmonic functions on abstract cones, similar theorems were obtained in [4]. Our notation is as in [3, 5]. We assume that the results of [3] are known to the reader, and we recall only some results on quasiregular (qr) mappings.

The quantity $K_F$ is called the maximal dilatation of $F$, and if $K_F \le K$, then the mapping $F$ is called $K$-quasiregular. If $F : M \to N$ is a quasiregular homeomorphism, then the mapping $F$ is called quasiconformal. In this case, the inverse mapping $F^{-1}$ is also quasiconformal in the domain $F(M) \subset N$ and $K_{F^{-1}} = K_F$.

Let $A$ and $B$ be Riemannian manifolds of dimensions $\dim A = k$ and $\dim B = n - k$, $1 \le k < n$, with scalar products $\langle \cdot,\cdot \rangle_A$ and $\langle \cdot,\cdot \rangle_B$, respectively. The Cartesian product $N = A \times B$ has the natural structure of a Riemannian manifold with the product scalar product. We denote by $\pi : A \times B \to A$ and $\eta : A \times B \to B$ the natural projections of the manifold $N$ onto the submanifolds. If $w_A$ and $w_B$ are volume forms on $A$ and $B$, respectively, then the differential form $w_N = \pi^* w_A \wedge \eta^* w_B$ is a volume form on $N$.

Theorem 2.1 (see [5]). Let $F : M \to N$ be a quasiregular mapping and let $f = \pi \circ F : M \to A$. Then the differential form $f^* w_A$ is of the class $WT_2$ on $M$ with the structure constants $p = n/k$, $\nu_1 = \nu_1(n, k, K_O)$, and $\nu_2 = \nu_2(n, k, K_O)$.

Let $D$ be an unbounded domain in $\mathbb{R}^n$ and let $f = (f_1, f_2, \dots, f_n) : D \to \mathbb{R}^n$ be a quasiregular mapping. We assume that $f \in C^0(\overline{D})$. It is natural to consider the Phragmén-Lindelöf alternative under two assumptions, (a) and (b), described below. Several formulations of the Phragmén-Lindelöf theorem under various assumptions can be found in [7-11]. However, these results are mainly of qualitative character. Here we give a new approach to Phragmén-Lindelöf type theorems for quasiregular mappings, based on isoperimetry, that leads to almost sharp results. Our approach can be used to prove Phragmén-Lindelöf type results for quasiregular mappings of Riemannian manifolds.

Let $N$ be an $n$-dimensional noncompact Riemannian $C^2$-manifold with piecewise smooth boundary $\partial N$ (possibly empty). A function $u \in C^0(N) \cap W^1_{n,\mathrm{loc}}(N)$ is called a growth function with $N$ as a domain of growth if (i) $u \ge 1$, (ii) $u|_{\partial N} = 1$ if $\partial N \ne \emptyset$, and (iii) $\sup_{y \in N} u(y) = \infty$.

We consider a quasiregular mapping $f : M \to N$, $f \in C^0(M \cup \partial M)$, where $M$ is a noncompact Riemannian $C^2$-manifold, $\dim M = n$, and $\partial M \ne \emptyset$. We assume that $f(\partial M) \subset \partial N$.
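Before turning to the Phragmén-Lindelöf alternative, we record the dilatation used above. The defining display does not appear here, so the following is a sketch of the standard definition for quasiregular mappings (an assumption, not a formula quoted from this paper):

```latex
K_O(F) = \operatorname*{ess\,sup}_{m \in M} \frac{\lVert F'(m) \rVert^{\,n}}{J_F(m)},
\qquad
K_I(F) = \operatorname*{ess\,sup}_{m \in M} \frac{J_F(m)}{\ell\bigl(F'(m)\bigr)^{\,n}},
\qquad
K_F = \max\{\, K_O(F),\ K_I(F) \,\},
```

where $\lVert F'(m)\rVert$ and $\ell(F'(m))$ are the largest and smallest stretchings of the differential, $J_F$ is the Jacobian, and $K_O$, $K_I$ are the outer and inner dilatations; $K_O$ is the constant entering Theorem 2.1.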
In what follows, we mean by the Phragmén-Lindelöf principle an alternative of the form: either the function $u(f(m))$ has a certain rate of growth in $M$, or $f(m) \equiv \mathrm{const}$. By choosing the domain of growth $N$ and the growth function $u$ in a special way, we can obtain several formulations of Phragmén-Lindelöf theorems for quasiregular mappings. In view of the examples in [7], the best results are obtained if an $n$-harmonic function is chosen as the growth function. In case (a), the domain of growth is $N = \{\, y = (y_1, \dots, y_n) \in \mathbb{R}^n : y_1 \ge 0 \,\}$, and as the function of growth it is natural to choose $u(y) = y_1 + 1$; in case (b), the domain $N$ is a set of the form $\{\, y = (y_1, \dots, y_n) \in \mathbb{R}^n : \dots \,\}$.

Exhaustion Functions

Below we introduce exhaustion and special exhaustion functions on Riemannian manifolds and give illustrating examples.

Exhaustion Functions of Boundary Sets

Let $h : M \to \mathbb{R}$ be a locally Lipschitz function such that there exists a compact $K \subset M$ with $|\nabla h(m)| > 0$ for a.e. $m \in M \setminus K$. We say that the function $h$ is an exhaustion function for a boundary set $\Xi$ of $M$ if, for an arbitrary sequence of points $m_k \in M$, $k = 1, 2, \dots$, we have $h(m_k) \to h_0$ if and only if $m_k \to \xi$. It is easy to see that this requirement is satisfied if and only if, for an arbitrary increasing sequence $t_1 < t_2 < \dots < h_0$, the sequence of open sets $V_k = \{\, m \in M : h(m) > t_k \,\}$ is a chain defining the boundary set $\xi$. Thus the function $h$ exhausts the boundary set $\xi$ in the traditional sense of the word. The function $h : M \to (0, h_0)$ is called an exhaustion function of the manifold $M$ if, in addition, the family of $h$-balls $\{B_h(t_k)\}$ generates an exhaustion of $M$.

Special Exhaustion Functions

Let $M$ be a noncompact Riemannian manifold with boundary $\partial M$ (possibly empty). Let $A$ satisfy (3.2) and (3.3), and let $h : M \to (0, h_0)$ be an exhaustion function satisfying additional conditions $(a_1)$ and $(a_2)$. Here $dH^{n-1}$ is the element of the $(n-1)$-dimensional Hausdorff measure on $\Sigma_h$. Exhaustion functions with these properties will be called the special exhaustion functions of $M$ with respect to $A$. In most cases, the mapping $A$ will be the $p$-Laplace operator (3.8) and, unless otherwise stated, $A$ is the $p$-Laplace operator. Since the unit vector $\nu = \nabla h / |\nabla h|$ is orthogonal to the $h$-sphere $\Sigma_h$, the condition $(a_2)$ means that the flux of the vector field $A(m, \nabla h)$ through the $h$-spheres $\Sigma_h(t)$ is constant.

In the following, we consider domains $D$ in $\mathbb{R}^n$ as manifolds $M$. However, the boundaries $\partial D$ of $D$ are allowed to be rather irregular. To handle this situation, we introduce the $(A, h)$-transversality property for $M$. Let $h : M \to (0, h_0)$ be a $C^2$-exhaustion function. We say that $M$ satisfies the $(A, h)$-transversality property if, for a.e. $t_1, t_2$ with $h < t_1 < t_2 < h_0$ and for every $\varepsilon > 0$, there exists an open set with piecewise regular boundary such that condition (4.8) holds, where $\nu$ is the unit inner normal to $\partial G$.

Let $M$ be a cylinder with base $D$. The function $h : M \to (0, \infty)$, $h(x) = x_3$, is an exhaustion function for $M$. Since every domain $D$ in $\mathbb{R}^2$ can be approximated by smooth domains from inside, it is easy to see that for $0 < t_1 < t_2 < \infty$ the domain $G = D \times (t_1, t_2)$ can be used as an approximating domain $G_\varepsilon(t_1, t_2)$. Note that the transversality condition (4.8) is automatically satisfied for the $p$-Laplace operator $A(m, \xi) = |\xi|^{p-2}\xi$.

Let $x_1, \dots, x_n$ be an orthonormal system of coordinates in $\mathbb{R}^n$, $1 \le n < p$. Let $D \subset \mathbb{R}^n$ be an unbounded domain with piecewise smooth boundary and let $B$ be a $(p - n)$-dimensional compact Riemannian manifold with or without boundary. We consider the manifold $M = D \times B$.
We denote by $x \in D$, $b \in B$, and $(x, b) \in M$ the points of the corresponding manifolds. Let $\pi : D \times B \to D$ and $\eta : D \times B \to B$ be the natural projections of the manifold $M$. Assume now that the function $h$ is a function on the domain $D$ satisfying the conditions $(b_1)$, $(b_2)$, and (3.8). We consider the function $h^* = h \circ \pi : M \to (0, \infty)$. We have (4.12). Because $h$ is a special exhaustion function of $D$, we have
$$\operatorname{div}\bigl(|\nabla h^*|^{p-2}\, \nabla h^*\bigr) = 0. \tag{4.13}$$
Let $(x, b) \in \partial M$ be an arbitrary point where the boundary $\partial M$ has a tangent hyperplane, and let $\nu$ be a unit normal vector to $\partial M$. If $x \in \partial D$, then $\nu = \nu_1 + \nu_2$, where the vector $\nu_1 \in \mathbb{R}^k$ is orthogonal to $\partial D$ and $\nu_2$ is a vector from $T_b(B)$. Thus $\langle \nabla h^*, \nu \rangle = 0$, because $h$ is a special exhaustion function on $D$ and satisfies the property $(b_2)$ on $\partial D$. If $b \in \partial B$, then the vector $\nu$ is orthogonal to $\partial B \times \mathbb{R}^n$, and the same boundary condition holds. The other requirements for a special exhaustion function for the manifold $M$ are easy to verify. Therefore, the function $h^* = h \circ \pi$ is a special exhaustion function on the manifold $M = D \times B$.

Example 4.5. We fix an integer $k$, $1 \le k \le n$, and set $d_k(x) = (x_1^2 + \dots + x_k^2)^{1/2}$. It is easy to see that $|\nabla d_k(x)| = 1$ everywhere in $\mathbb{R}^n \setminus \Sigma_0$, where $\Sigma_0 = \{x \in \mathbb{R}^n : d_k(x) = 0\}$. We shall call the set $B_k(t) = \{x : d_k(x) < t\}$ a $k$-ball and the set $\Sigma_k(t) = \{x : d_k(x) = t\}$ a $k$-sphere. We shall say that an unbounded domain $D \subset \mathbb{R}^n$ is $k$-admissible if, for each $t > \inf_{x \in D} d_k(x)$, the set $D \cap B_k(t)$ has compact closure. It is clear that every unbounded domain $D \subset \mathbb{R}^n$ is $n$-admissible. In the general case, the domain $D$ is $k$-admissible if and only if the function $d_k(x)$ is an exhaustion function of $D$. It is not difficult to see that if a domain $D \subset \mathbb{R}^n$ is $k$-admissible, then it is $l$-admissible for all $k < l < n$.

Fix $1 \le k < n$. Let $\Delta$ be a bounded domain in the $(n-k)$-plane $x_1 = \dots = x_k = 0$. The corresponding domain $D$ is $k$-admissible. The $k$-spheres $\Sigma_k(t)$ are orthogonal to the boundary $\partial D$, and therefore $\langle \nabla d_k, \nu \rangle = 0$ everywhere on the boundary. Let $h = \varphi(d_k)$, where $\varphi$ is a $C^2$-function with $\varphi' \ge 0$. We have $\nabla h = \varphi'(d_k)\, \nabla d_k$ and, since $|\nabla d_k| = 1$, we obtain $|\nabla h| = \varphi'(d_k)$. From the equation (3.8) one determines $\varphi$ so that $h$ is a special exhaustion function for the manifold $M$. Therefore, for $p \ge n$ the given manifold has $p$-parabolic type, and for $p < n$, $p$-hyperbolic type.

Example 4.9. Let $(r, \theta)$, where $r \ge 0$, $\theta \in S^{n-1}(1)$, be the spherical coordinates in $\mathbb{R}^n$. Let $U \subset S^{n-1}(1)$ be an arbitrary domain on the unit sphere $S^{n-1}(1)$. We fix $0 \le r_1 < r_2 < \infty$ and consider the domain $D = \{(r, \theta) : r_1 < r < r_2,\ \theta \in U\}$ with the metric
$$ds_M^2 = \alpha^2(r)\, dr^2 + \beta^2(r)\, dl_\theta^2, \tag{4.30}$$
where $\alpha(r), \beta(r) > 0$ are $C^0$-functions on $(r_1, r_2)$ and $dl_\theta$ is an element of length on $S^{n-1}(1)$. The manifold $M = (D, ds_M^2)$ is a warped Riemannian product. In the case $\alpha(r) \equiv 1$, $\beta(r) \equiv 1$, $U = S^{n-1}$, the manifold $M$ is isometric to a cylinder in $\mathbb{R}^{n+1}$. In the case $\alpha(r) \equiv 1$, $\beta(r) = r$, $U = S^{n-1}$, the manifold $M$ is a spherical annulus in $\mathbb{R}^n$. The volume element in the metric (4.30) is given by the expression
$$d\sigma_M = \alpha(r)\, \beta^{n-1}(r)\, dr\, dS_{n-1}. \tag{4.31}$$
If $\varphi(r, \theta) \in C^1(D)$, then the length of the gradient $\nabla \varphi$ in $M$ takes the corresponding warped-product form, where $\nabla_\theta \varphi$ is the gradient in the metric of the unit sphere $S^{n-1}(1)$. For the special exhaustion function $h(r, \theta) \equiv h(r)$, (3.8) reduces to an ordinary differential equation in $r$, from which one reads off the type of the boundary set: (i) if $h_0 = \infty$, the set $\xi$ has $p$-parabolic type; (ii) if $h_0 < \infty$, the set $\xi$ has $p$-hyperbolic type. The quantity (4.38) is independent of $t > h(K) = \sup\{h(m) : m \in K\}$.
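The $t$-independent quantity just mentioned is the flux from condition $(a_2)$. For the $p$-Laplace operator, the two defining requirements on a special exhaustion function can be written explicitly as follows (a sketch of the standard reading, not a display quoted from the paper):

```latex
\operatorname{div}\bigl(|\nabla h|^{p-2}\,\nabla h\bigr) = 0 \quad \text{in } M \setminus K,
\qquad
J \;=\; \int_{\Sigma_h(t)} |\nabla h|^{p-1}\, dH^{n-1} \;=\; \mathrm{const},
```

since on $\Sigma_h(t)$ the normal component of the field is $A(m, \nabla h) \cdot \nu = |\nabla h|^{p-2}\,\langle \nabla h, \nabla h / |\nabla h| \rangle = |\nabla h|^{p-1}$.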
Indeed, for the variational problem [3, (2.9)], we choose the function $\varphi_0$ with $\varphi_0(m) = 0$ for $m \in B_h(t_1)$. Because the special exhaustion function satisfies (3.8) and the boundary condition $(a_2)$, one obtains the corresponding identity for arbitrary $\tau_1, \tau_2$. Thus we have established the required inequality. By the conditions imposed on the special exhaustion function, the function $\varphi_0$ is an extremal in the variational problem [3, (2.9)]. Such an extremal is unique, and therefore the preceding inequality holds as an equality. This conclusion proves (4.37). If $h_0 = \infty$, then letting $t_2 \to \infty$ in (4.37) we conclude the parabolicity of the type of $\xi$. Let $h_0 < \infty$. Consider an exhaustion $\{U_k\}$ and choose $t_0 > 0$ such that the $h$-ball $B_h(t_0)$ contains the compact set $K$. Set $t_k = \sup_{m \in \partial U_k} h(m)$. Then for $t_k > t_0$ the corresponding estimate holds, hence the limit inferior (4.44) is positive, and the boundary set $\xi$ has $p$-hyperbolic type.

Wiman Theorem

Now we will prove Theorem 1.1.

Fundamental Frequency

Let $U \subset \Sigma_h(\tau)$ be an open set. We further need the quantity $\lambda_p(U)$ defined by (5.1), where the infimum is taken over all functions $\varphi \in W^1_p(U)$ with $\operatorname{supp} \varphi \subset U$. (By definition, $\varphi$ is a $W^1_p$-function on an open set $U$ if $\varphi$ belongs to this class on every component of $U$.) Here $\nabla_2 \varphi$ is the gradient of $\varphi$ on the surface $\Sigma_h(\tau)$. In the case $|\nabla h| \equiv 1$, this quantity is well known and can be interpreted, in particular, as the best constant in the Poincaré inequality. Following [14], we shall call this quantity the fundamental frequency of the rigidly supported membrane $U$. We observe a useful property of the fundamental frequency and also need the following statement.

Lemma 5.2. Under the above assumptions, for a.e. $\tau \in (0, h_0)$ we have the inequality (5.10), where $\lambda_p$ is the fundamental frequency of the membrane $\Sigma_h(\tau)$ defined by formula (5.1). For the proof, see Lemma 4.3 in [10].

We now use these estimates for proving Phragmén-Lindelöf type theorems for the solutions of quasilinear equations on manifolds. Under the assumptions of (5.13), either $f(m) \le 0$ everywhere on $M$ or the growth estimates (5.14)-(5.16) hold; in particular, (5.16) applies when $h$ is a special exhaustion function on $M$. Here $O_\tau = \{m \in O : \tau < h(m) < \tau + 1\}$. By Lemma 5.2, the inequality (5.21) holds. Thus, using the requirement (3.3) for (3.4), we arrive at the estimate (5.22). Further, we observe from the condition $f(m) > 0$ the relation (5.23); from this relation, we arrive at (5.14). The proof of (5.15) is carried out in exactly the same way by means of the inequality [3, (5.75)]. In order to convince ourselves of the validity of (5.16), we observe that by the maximum principle (5.24) holds. But $h$ is a special exhaustion function, and therefore by (4.37) we can write (5.25), where $J$ is a number independent of $\tau$. The relation (5.15) then implies that (5.16) holds.

Example 5.4. Let $A$ be a compact Riemannian manifold with nonempty piecewise smooth boundary, $\dim A = k \ge 1$, and let $M = A \times \mathbb{R}^n$, $n \ge 1$. Choosing as a special exhaustion function of $M$ the function $h(a, x)$ defined in Example 4.8, and using the fact that $h(a, x)|_{\Sigma_h(t)} = t$, we find the corresponding normalizations for $p = n$ and for $p \ne n$. Therefore, on the basis of (5.1) we get (5.29), where $d\sigma_A$ is an element of $k$-dimensional area on $A$. Therefore, we obtain (5.31), where the infimum is taken over all functions $\psi = \psi(a, x)$ with $\psi(a, x) \in W^1_p(A \times S^{n-1}(1))$ and $\psi(a, x)|_{a \in \partial A} = 0$ for all $x \in S^{n-1}(1)$.

In the particular case $n = 1$, Theorem 5.3 has a particularly simple content. Here $h(x)$ is a function of one variable, and $\Sigma_h(t) = A \times S^0(t)$ is isometric to $\Sigma_h(1)$. Therefore $h'(t) \equiv 1$, and by (5.31) we have (5.33); in the same way, (5.16) can be written in the corresponding form. Let $n \ge 2$.
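For orientation, when $|\nabla h| \equiv 1$ the fundamental frequency (5.1) reduces to the familiar Rayleigh-type quotient (a standard normalization, written here as a sketch rather than as the paper's exact display):

```latex
\lambda_p(U) \;=\;
\inf_{\substack{\varphi \in W^1_p(U)\\ \operatorname{supp} \varphi \subset U}}
\left(
\frac{\displaystyle \int_U |\nabla_2 \varphi|^{p}\, dH^{n-1}}
     {\displaystyle \int_U |\varphi|^{p}\, dH^{n-1}}
\right)^{\!1/p},
```

so that $\lambda_p(U)^{-p}$ is the best constant $C$ in the Poincaré inequality $\int_U |\varphi|^p \, dH^{n-1} \le C \int_U |\nabla_2 \varphi|^p \, dH^{n-1}$ for test functions supported in $U$.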
We do not know of examples where the quantity (5.31) has been computed exactly. Some idea of the rate of growth of the quantity $M(\tau)$ in the Phragmén-Lindelöf alternative can be obtained from the following arguments. Simplifying the numerator of (5.31) by ignoring the second summand, we get the estimate (5.35). For each fixed $x \in S^{n-1}(1)$, the function $\psi(a, x)$ is finite on $A$, because this follows from the definition of the fundamental frequency.

Proof of Theorem 1.1. We assume that the conditions of the theorem are fulfilled.

Example 5.6. As the first corollary, we shall now prove a generalization of Wiman's theorem for the case of quasiregular mappings $f : M \to \mathbb{R}^n$, where $M$ is a warped Riemannian product. For $0 \le r_1 < r_2 \le \infty$, let
$$D = \{\, (r, \theta) \in \mathbb{R}^n : r_1 < r < r_2,\ \theta \in S^{n-1}(1) \,\}$$
be a ring domain in $\mathbb{R}^n$ and let $M = (r_1, r_2) \times S^{n-1}(1)$ be an $n$-dimensional Riemannian manifold on $D$ with a metric of the form (4.30), on which $h$ is a special exhaustion function. Let $f : M \to \mathbb{R}^n$ be a quasiregular mapping. We set $u(y) = \log |y|$. This function is a subsolution of (3.4) with $p = n$ and also satisfies all the other requirements imposed on a growth function. Therefore, the requirement (1.1) on the manifold will be fulfilled if (5.58) holds. In this way, we get the following corollary.

We assume that in Example 5.8 the quantities $r_1 = 0$, $r_2 = \infty$, and the functions $\alpha(r) \equiv 1$, $\beta(r) = r$; that is, the manifold is $\mathbb{R}^n$. As the special exhaustion function, we choose $h = \log |x|$. This function satisfies (3.6) with $p = n$ and $\nu_1 = \nu_2 = 1$. The condition (5.58) for the manifold is obviously fulfilled. The condition (5.62) attains the form (5.65). We have the following corollary.

Asymptotic Tracts and Their Sizes

Wiman's theorem for quasiregular mappings $f : \mathbb{R}^n \to \mathbb{R}^n$ asserts the existence of a sequence of spheres $S^{n-1}(r_k)$, $r_k \to \infty$, along which the mapping $f(x)$ tends to $\infty$. It is possible to strengthen the theorem further and to specify the sizes of the sets along which such convergence takes place. For the formulation of this result, it is convenient to use the language of asymptotic tracts discussed by MacLane [15].

Tracts

Let $f : M \to \mathbb{R}^n$ be a quasiregular mapping having a point $a \in \mathbb{R}^n$ as a Picard exceptional value; that is, $f(m) \ne a$ and $f(m)$ attains on $M$ all values of $B(a, r) \setminus \{a\}$ for some $r > 0$. The set $\{\infty\} \cup \{a\}$ has $n$-capacity zero in $\overline{\mathbb{R}^n}$, and there is a solution $g(y)$ in $\mathbb{R}^n \setminus \{a\}$ of (3.4) such that $g(y) \to \infty$ as $y \to a$ or $y \to \infty$ (cf. [12, Chapter 10], polar sets). As the growth function on $\mathbb{R}^n \setminus \{a\}$, we choose the function $u(y) = \max\{0, g(y)\}$. It is clear that this function is a subsolution of (3.4) in $\mathbb{R}^n \setminus \{a\}$. Then there exists at least one $M_s$ having a nonempty intersection with $f^{-1}(B(a, r))$. By the maximum principle for subsolutions, such a component cannot be relatively compact. Letting $s \to \infty$, we find an asymptotic tract $\{M_s\}$ along which the quasiregular mapping tends to the Picard exceptional value $a \in \mathbb{R}^n$. Because one can find in every asymptotic tract a curve $\Gamma$ along which $u(f(m)) \to \infty$, we obtain the following generalization of Iversen's theorem [16].

Theorem 6.1. Every Picard exceptional value of a quasiregular mapping $f : M \to \mathbb{R}^n$ is an asymptotic value.

The classical form of Iversen's theorem asserts that if $f$ is an entire holomorphic function of the plane, then there exists a curve $\Gamma$ tending to infinity such that $f(z) \to \infty$ as $z \to \infty$ on $\Gamma$. We prove a generalization of this theorem for quasiregular mappings $f : M \to N$ of Riemannian manifolds. The following result holds.

Proof of Theorem 1.2. We fix a growth function $u$ and a special exhaustion function $h$ as in Section 4.
Let $f : M \to N$ be a nonconstant quasiregular mapping. We set
$$M(\tau) = \max_{h(m) = \tau} u(f(m)). \tag{6.7}$$
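The growth function $u(y) = \log |y|$ used in Example 5.6 and in the tract construction is $n$-harmonic away from the origin, i.e., it solves (3.4) with $p = n$ there; this can be checked by a routine computation (a sketch under the usual identification of (3.4) with the $n$-Laplace equation):

```latex
\nabla u = \frac{y}{|y|^{2}}, \qquad
|\nabla u|^{\,n-2}\,\nabla u = \frac{y}{|y|^{n}}, \qquad
\operatorname{div} \frac{y}{|y|^{n}}
 = \frac{n}{|y|^{n}} - n\,\frac{|y|^{2}}{|y|^{n+2}} = 0 \quad (y \ne 0),
```

so $\operatorname{div}(|\nabla u|^{n-2}\nabla u) = 0$ in $\mathbb{R}^n \setminus \{0\}$, which is exactly the (sub)solution property required of a growth function there.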
THE DEVELOPMENT OF QUIZIZZ-ASSISTED THINK-TALK-WRITE APPROACH TEACHING MATERIALS TO IMPROVE STUDENTS' PROBLEM-SOLVING SKILLS IN SOCIAL ARITHMETIC

ABSTRACT

INTRODUCTION

Students' mathematical problem-solving abilities must be improved in learning activities, because in the process of learning and solving a problem, students gain experience in using the knowledge and skills they already have to solve mathematical problems. Given the importance of mathematics, mathematics cannot be separated from its role in all aspects of life. As Cornelius states, mathematics is a means to solve the problems of everyday life. However, in reality, students often experience difficulties in associating material with real-life problems. Students' understanding is still abstract and has not touched practical needs and their application in real life. Problem solving is one of the goals of the learning process in terms of curriculum aspects. The importance of problem solving in learning was also conveyed by the National Council of Teachers of Mathematics (NCTM). According to NCTM (2000), the process of mathematical thinking in learning mathematics includes five main competency standards, namely problem-solving ability, reasoning ability, connection skills, communication skills, and representational ability. Low problem-solving ability results in low quality of human resources. This is because, so far, learning has not provided opportunities for students to develop their ability to solve problems.

One form of teaching materials that can be developed by researchers is teaching materials assisted by the Quizizz application. The development of teaching materials is expected to improve the processes and results of students' mathematical problem-solving abilities. Students can freely collect information related to mathematical problem solving, both from media images provided by the teacher and from various other sources. Quizizz is a game-based educational application that brings multiplayer play into the classroom; it is online and fun, works in real time, and learning outcomes can be downloaded immediately. Quizizz can be used as an alternative learning evaluation for students, and item analysis can also be monitored. Quizizz can be accessed through electronic devices owned by students. Quizizz also allows students to compete with each other, provides motivation in learning, shows live leaderboards, helps stimulate interest, has a very attractive appearance, and has a time setting that can guide student concentration. The use of Quizizz assists teachers in carrying out learning-outcome assessment activities without being limited by place.
Education is one important aspect that will determine the quality of life of a person or a nation. In formal education, one of the subjects in schools that can be used to build students' way of thinking is mathematics. Therefore, mathematics lessons at school do not only emphasize giving formulas but also teach students to be able to solve various math problems related to everyday life (Samin, 2018). Mathematics is a field of science that is very important for human life. Formally, mathematics is taught from elementary school to university; this is because mathematics is indeed a subject that students must master, as it has great benefits in life, especially in improving the human mindset. Learning in schools refers to the applicable curriculum, with predetermined learning goals that are expected to be achieved by all students, including in learning mathematics (Sani, 2017).

Teaching materials are one of the most important elements in curriculum implementation. With teaching materials, the time needed during the learning process will be used more efficiently. Majid states that teaching materials are all types of materials used to assist teachers or instructors in carrying out teaching and learning activities (Reza, 2020). Subandiyah (Khuzaemah & Ummi, 2019) explains that teaching materials are anything that is used by teachers or students to facilitate language learning and to increase knowledge and language experience. Another definition states that teaching materials are a collection of materials arranged systematically so as to create an environment or atmosphere that allows students to learn.

The Think Talk Write (TTW) learning model was first introduced by Huinker and Laughlin in 1996. The Think Talk Write (TTW) learning model is based on constructivist learning, which is applied through thinking, speaking, and writing activities. Huinker and Laughlin (Hamdayana, 2020, p. 217) state that the Think Talk Write (TTW) learning model process can build understanding through thinking, speaking, and sharing ideas with friends before writing. This is in line with Suyanto's opinion (Chandra, et al. 2018, p. 36) that this learning begins with thinking through reading material (listening, criticizing, and considering alternative solutions). Reading results are communicated through presentations and discussions, followed by reports on the results of the presentation. The essence of the Think Talk Write (TTW) learning model is a constructivist learning design built through communication activities with oneself and between students and teachers, which encourage students to think, speak, express opinions, and write down the results.

According to Isrok'atun, et al.
(2018), in learning mathematics the Think Talk Write (TTW) learning model is applied through three mathematical abilities, namely thinking mathematically, speaking mathematically, and writing mathematically. Mathematical thinking is applied by understanding an event or mathematical problem. This math problem is packaged as a real-life problem. Speaking ability is applied when students verbally express various mathematical ideas based on their knowledge. Students express mathematical ideas using their own language. Furthermore, writing skills are applied by directing students to express the mathematical ideas they have acquired in written form using the language of mathematics, namely symbols, concepts, and mathematical rules.

According to Rizal Dzul Fadly (2020, p. 72), Quizizz is a web tool that can be used for learning in the classroom and outside the classroom in the form of homework, and it can also be used as an interactive quiz game. As described in the introduction, Quizizz is a game-based educational application with multiplayer, real-time quizzes, live leaderboards, item analysis, downloadable learning outcomes, and time settings that can guide student concentration, and it assists teachers in carrying out assessment activities without being limited by place.

The research conducted using this model was limited to step 3 (Define, Design, Develop) due to limited research time. The interview process was carried out before conducting the research on the teaching materials. This aimed to obtain data regarding the material that must be taught, the learning objectives to be achieved, and the media that students have used so far. This research also required information on whether the school facilities support the learning process. The source of information for these interviews was a mathematics teacher who serves as a curriculum representative at the Al-Fatih Integrated Junior High School. Expert validation, validation of the learning-outcome test questions, and manuals were prepared for expert use. The validators were a lecturer in the Mathematics Education Study Program, FPMS IKIP Siliwangi, and a mathematics teacher at the Al-Fatih Integrated Junior High School.

METHOD

The questionnaire method is a measuring tool for measuring student responses after learning with the Quizizz application. The questionnaire instrument for this method covers the ease of understanding the material, the students' level of enjoyment and boredom when using the learning media, their willingness to repeat the learning, and the level of student motivation after using the learning media. All research data were collected and processed using Microsoft Excel in the form of: 1) descriptive statistics to describe the stages of the development process and the constraints during development, and 2) inferential statistics to see the feasibility and effectiveness of the product. Data processing and analysis were then used to formulate the research results; the results of this analysis are the answers to the existing problems. This in-depth analysis covers the expert validation of the learning-media assessment instrument. The data from the expert team's validation results were analyzed
using a Likert scale. The percentage of the validation results is calculated by dividing the total score of the respondents' answers by the maximum total score and multiplying by 100%; the resulting percentage is interpreted using the criteria in Table 1:

81%-100%   Very worthy       The product can be used immediately without repair
61%-80%    Worthy            The product can be used with minor repairs
41%-60%    Decent enough     The product can be used with many improvements
21%-40%    Not yet eligible  The product can be used with many improvements
0%-20%     Inadequate        The product cannot be used

Results

The results of this study aim to develop teaching materials for social arithmetic learning made with the help of the Quizizz application. In addition to producing an interactive quiz application, the researchers also want to assess the feasibility of the teaching materials used in the learning process. The results of the research and development with the 4D model, limited to step 3 (Define, Design, Develop), are explained as follows.

Define

The analysis carried out on the seventh-grade junior high school mathematics curriculum concerns social arithmetic, which will be designed using the Quizizz application. After careful analysis, the material chosen for development is social arithmetic, because this material must be explained with reasoning and understanding: determining the overall value and determining the profit and loss on a sale. Interesting teaching materials need to be developed so that students can understand the concepts; visual aids alone are not enough, because most students find it difficult to understand the steps that must be taken, and the teaching materials to be developed will help students better understand the social arithmetic material. This material requires an in-depth treatment of students' mathematical problem-solving abilities. Because students like technology such as computers and cellphones and have very high curiosity, with Quizizz application-based learning material students will be interested in learning and will not feel bored with the learning material delivered. The teachers' opinion of teaching materials assisted by the Quizizz application is quite positive, because such materials can make the teaching and learning process easier and build students' learning motivation. However, several obstacles were faced, namely the very limited time available in the learning process, school facilities that are very inadequate for implementing learning in class, and some students who do not have mobile phones, so that learning is somewhat hampered.

An analysis of students' needs was carried out to ensure that the teaching materials are truly effective in learning mathematics. The results obtained show that students have different characters and abilities. The learning objectives to be achieved by the teacher should pay attention to the needs of students according to their character. The teacher is not solely the dominant party in controlling learning in the classroom, because each student has sensitivity in learning, curiosity, the ability to express opinions, and learning needs that attract his or her attention, so that students' interest in learning grows. The abilities possessed by these students, and their sensitivity to learning, demand suitable teaching materials. Observations of learning at the Al-Fatih Integrated Junior High School show that during lessons the teacher does not use teaching materials.
Design

The design at this stage is carried out by designing the teaching materials. The steps for designing the teaching materials are as follows. 1) Front page of the teaching materials: the front page contains a general login page for the teaching materials and several links that can be used to verify Quizizz account logins, so that students who have accounts can access the teaching materials. 2) Advanced login page for the use of the teaching materials: this section contains guide steps to help students access the teaching materials that will be used during the learning process. 3) Materials and activities: the material page is intended to give students the basic knowledge for understanding social arithmetic material. The learning media used are teaching materials assisted by the Quizizz application, which include activities based on the think-talk-write approach; moreover, the materials and activities are arranged according to the characteristics of the think-talk-write approach. After that, students are given problem-solving activities related to the material to be studied.

Figure 2. Learning Media Design

After the development of the teaching materials using the Quizizz application was finished, and before the teaching materials underwent a small-scale trial, an assessment or validation of their eligibility was carried out by media experts, material experts, and mathematics teachers at the Al-Fatih Integrated Junior High School; the results became a reference for making improvements to the teaching materials. This process is very useful before the product is shown to students at the field-test stage. The assessment was carried out with media experts to determine the feasibility of the think-talk-write teaching materials assisted by the Quizizz application. Aspects assessed include appearance, content, and benefits.

Develop

The development stage of the 4D model, which is limited to step 3 (Define, Design, Develop), contains activities to realize the product design, in this case think-talk-write teaching materials assisted by the Quizizz application. The production step in this research is to create and modify the teaching materials. At the design stage, a flowchart was made and then realized in the form of teaching-material development products ready to be implemented in accordance with the objectives of developing the teaching materials. To determine the feasibility of the teaching materials developed, the researcher asked for assessments from material experts, media experts, and one field practitioner, a teacher observer at the school. In addition, the most important thing in the validation process by experts is to re-ensure that the teaching-material products assisted by the Quizizz application can indeed solve problems effectively and efficiently. Assessment findings are compiled and presented in tabular form. Validation of the lesson plans (RPP), student worksheets (LKPD), teaching materials, and teacher instruments was carried out by validator I and validator II, namely Wahyu Setiawan, M.Pd. and Tubagus Suwanda, S.Pd. The results of the media validation by the media experts showed that, for the 15 statements given, the total score was 67, the validators' average overall rating was 4.46, and the validators' percentage score was 89%, which is included in the very valid category with a score interval of 80%-100%.
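To make the arithmetic behind these figures concrete, here is a minimal sketch (hypothetical helper names, assuming a 5-point Likert scale per statement as in this study) that reproduces the reported total, average, percentage, and category from a validator's item scores:

```python
# Minimal sketch of the Likert-percentage computation used in validation.
# Helper names are hypothetical; a 5-point scale per statement is assumed.

def validation_percentage(scores, max_per_item=5):
    """Return (total, mean, percentage) for a list of item scores."""
    total = sum(scores)
    mean = total / len(scores)
    percentage = 100.0 * total / (max_per_item * len(scores))
    return total, mean, percentage

def category(percentage):
    """Map a percentage to the criteria of Table 1."""
    if percentage > 80:
        return "very worthy: usable without repair"
    if percentage > 60:
        return "worthy: usable with minor repairs"
    if percentage > 40:
        return "decent enough: needs many improvements"
    if percentage > 20:
        return "not yet eligible: needs many improvements"
    return "inadequate: cannot be used"

# Example: 15 media-expert statements totalling 67 points, as reported above.
scores = [5, 5, 4, 4, 5, 4, 5, 4, 5, 4, 4, 5, 4, 5, 4]  # sums to 67
total, mean, pct = validation_percentage(scores)
print(total, round(mean, 2), round(pct, 1), category(pct))
# -> 67 4.47 89.3 very worthy: usable without repair
```

The reported 4.46 average and 89% percentage follow from 67/15 and 67/75 respectively, up to rounding.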
After validation by the material experts and media experts, the feasibility of the developed think-talk-write teaching materials assisted by the Quizizz application can be seen, along with the recapitulation in Table 5. Based on Table 6, it can be seen that the total score obtained was 70, with a percentage of 87.5%, which is included in the very valid category for the developed teaching materials.

After being validated by the experts and teaching staff (practitioners), a small-scale trial was carried out. This small-scale trial was used to find out and obtain results on the quality, practicality, and usability of the teaching materials developed by the researchers. The small-scale trial was carried out with 10 class VII students of the Al-Fatih Integrated Junior High School who had finished studying the social arithmetic material. The students were selected randomly, and a student questionnaire was administered regarding the assessment of the teaching materials assisted by the Quizizz application. Based on the results of the response questionnaire, and with reference to the practicality criteria, in the small-scale trial the students' responses to the development of the teaching materials presented with the help of the Quizizz application reached a percentage of 82%, which can be declared "very practical". In this limited trial, not many revisions were found, and some students were very interested in using the teaching materials. The improvements made included changing the layout and replacing images in the teaching materials that were still blurry. Furthermore, after the small-scale trial, the researchers made minor improvements to the content of the teaching materials; the next step was to conduct a trial on a larger scale. This large-scale trial was conducted with 30 grade VII students of SMP Al-Fatih Terpadu, with the following results: based on the response questionnaire, and with reference to the practicality criteria, in this broad trial the students' responses to the development of the teaching materials presented with the help of the Quizizz application reached a percentage of 84%, which can be said to be "very practical", and not many revisions were found. A recapitulation follows in Table 8. Based on the recapitulation, and with reference to the practicality criteria in the small and large trials, the students' responses to the development of the teaching materials presented with the help of the Quizizz application reached a percentage of 83% and can be stated to be "very practical".

Discussions

Education is an important aspect that will determine the quality of life of a person or a country. In formal education, one of the school subjects that can be used to build students' way of thinking is mathematics. Therefore, school mathematics lessons do not only emphasize formulas but also teach students to be able to solve various mathematical problems related to everyday life (Samin, 2018). The researchers used the curriculum as a basic and main reference so that the contents of the teaching materials could be adapted to it. In addition, there are important points that become reference requirements for researchers when developing teaching materials.
The researchers paid attention to these points, in line with Emanuel (2021), who explains that the development of learning tools must attend to these important requirements so that the finished product can be used directly as a tool to support learning. The next stage was the preparation of the teaching materials, learning devices, and test instruments. This stage is a fairly long and time-consuming activity. The steps taken to make the teaching materials started from preparing material according to the Core Competencies (KI) and Basic Competencies (KD), then designing content schemes for the teaching materials with the characteristics of the think-talk-write approach to social arithmetic, together with practice questions to strengthen the depth of the material. With advances in application-based technology, Quizizz has become a design tool especially suited to making teaching materials, which improves the packaging and visual presentation of the materials. The teaching materials presented have high-quality pictures and illustrations, which become one of the factors that increase students' interest and enthusiasm for learning.

All teaching-material products that are developed must, before they are widely used during learning in schools, be checked for feasibility through validation by experts, so that the validity of the developed teaching materials can be established. This validation was carried out by IKIP Siliwangi lecturers and mathematics practitioners (teachers) at schools, according to the relevant field criteria and abilities. These stages and their implementation are in accordance with Nurdien (2019): in the process of developing teaching materials, the validation step by experts is an important step for the research results.

The validators determine whether or not the teaching materials are appropriate; for this reason, the validation was carried out with the same validators and indicators. This was done to find out how far the development of the product on social arithmetic material had progressed. In the results of expert validations I and II, the "very feasible" category was obtained with a few revisions. From these results, the researchers conclude that student responses to the developed teaching materials fall in the "very practical" category for use in learning mathematics.

Imtiyas (2018) suggests that, to determine the feasibility of the teaching materials being developed, direct field trials are needed to find out students' responses to the developed materials. After confirming that the developed teaching materials were feasible and could be implemented during learning, the researchers conducted product trials with students, so that they could see whether the teaching materials were effective during learning. The implementation of learning had previously been consulted with the validators and the mathematics teacher at the school. During the learning process, students were able to follow and carry out the instructions for working on the teaching materials, which incorporate the TTW approach. The initial step is for students to observe the problems presented in the teaching materials using the Quizizz application, then try practical adjustments, and finally draw conclusions. According to Sandinan et al.
(2020), the steps of the TTW approach can produce students who can actively seek, process, construct, and use knowledge. Siwi & Puspaningtyas (2020) reported that, after developing teaching materials that had been validated and improved, students were able to understand the material well and explain it back to other students. After learning was carried out according to the TTW approach with carefully prepared tools, the final meeting of the lesson was a test to see the effectiveness of learning using the Quizizz-assisted teaching materials on student learning outcomes and their ability to solve mathematical problems. This step was taken because the questions given to students were arranged according to indicators of mathematical problem-solving ability. In line with this, the effectiveness test aims to find out how much influence the teaching materials have on student learning outcomes (Ernawati, 2020).

CONCLUSION
Based on the results of the research and the existing problem formulation, the following conclusions can be drawn. The process of developing think-talk-write teaching materials assisted by the Quizizz application to improve the problem-solving abilities of class VII students already meets the "very feasible" and "very effective" criteria, and student responses to the teaching materials meet the "very practical" criteria; based on the student learning outcomes obtained, the developed teaching materials are effective for class VII junior high school students. The obstacles encountered during development include the limited time for learning and school facilities that were inadequate for implementing learning in the classroom. The researchers' suggestions are as follows: 1) Learning using teaching materials assisted by the Quizizz application can be used and developed by teachers on an ongoing basis with different materials and themes. 2) In making teaching materials there are several obstacles or difficulties that might be improved on by other researchers developing teaching materials with different methods. 3) For future researchers, this research can be a reference for developing other products, especially those useful in the field of education.

This research was conducted at Al-Fatih Integrated Junior High School, located in Cikalong Wetan District, West Bandung Regency. The subjects were 30 class VII students. The research was carried out over 3 days, on March 28-30, 2023. The type of research used was development research (R&D) with a 4D development model (Define, Design, Develop, Disseminate).

Table 1. Product Validation Criteria
Validation percentage = (total score of respondents' answers as a whole / maximum total score overall) x 100%, where 100% is a constant. These benchmarks are used to present the validation scores.

Student character analysis was carried out by interviewing the class VII accompanying teachers. Interviews were conducted with the class VII teacher at Al-Fatih Integrated Middle School, West Bandung, namely Mr.
Tubagus Suwanda, S.Pd, and with several class VII students, two of whom were named Salwa and Mutia. The interviews indicated that teaching materials were needed to support the delivery of material to students in class. The methods used by the teacher in learning mathematics are discussion, question and answer, group work, and teaching aids to help explain the material to students; but today's students are more engaged by technology.

Table 2. RPP validation results. The results of lesson plan validation by material experts and media experts show that, of the 21 statements given, the total score was 99, the validators' overall average rating was 4.71, and the validators' percentage score was 94%, which falls in the very valid category with a score interval of 80%-100%. The results of the LKPD assessment are presented in Table 3: the LKPD validation by material experts and media experts found that, of the 15 statements given, the total score was 67, the validators' overall average rating was 4.46, and the validators' percentage score was 89%, also in the very valid category with a score interval of 80%-100%. The results of the assessment of the teaching materials are presented in Table 4.

Table 4. Results of Teaching Materials Validity
Table 6. Results of the Validity of Mathematics Teachers
Table 7. Student Response Results in Small-Scale Trials
Table 8. Student Response Results in Large-Scale Trials
Table 9. Summary of the results of small and large scale trials
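As a quick illustration of the scoring procedure in Table 1, the short Python sketch below recomputes the reported validation figures, assuming a 5-point rating scale (maximum score = 5 x number of statements); this is illustrative code, not part of the original study.

# Illustrative sketch (assumed 5-point scale; not from the study):
# validation percentage = (total score / maximum total score) x 100%.
def validation(total_score, n_statements, max_per_item=5):
    max_total = n_statements * max_per_item
    return total_score / n_statements, total_score / max_total * 100

print(validation(67, 15))  # media expert / LKPD: average ~4.46, ~89% -> "very valid"
print(validation(99, 21))  # lesson plan (RPP): average ~4.71, ~94% -> "very valid"

Run as-is, this reproduces the averages and percentages reported above for the RPP and LKPD validations.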
5,765.2
2024-03-12T00:00:00.000
[ "Education", "Mathematics" ]
Highly efficient homology-directed repair using transient CRISPR/Cpf1-geminiviral replicon in tomato

Genome editing via the homology-directed repair (HDR) pathway in somatic plant cells is very inefficient compared to illegitimate repair by non-homologous end joining (NHEJ). Here, compared to a Cas9-based replicon system, we enhanced HDR-based genome editing efficiency approximately 3-fold via a transient geminiviral replicon system equipped with CRISPR/LbCpf1 in tomato, and obtained replicon-free events carrying stable HDR alleles. The efficiency of CRISPR/LbCpf1-based HDR was significantly modulated by physical culture conditions such as temperature and light. A ten-day incubation at 31 °C under light/dark cycles after Agrobacterium-mediated transformation performed best among the conditions tested. Further, we developed a multi-replicon system as a novel tool to introduce effector components required to increase HDR efficiency. Although it remains challenging, we also showed the feasibility of HDR-based genome editing without genomic integration of an antibiotic marker or any phenotypic selection. Our work may pave the way for transgene-free rewriting of alleles of interest in asexually as well as sexually reproducing plants.

CRISPR/LbCpf1-based geminiviral replicon system highly enhanced HDR in tomato
To test the hypothesis, we re-engineered a Bean Yellow Dwarf Virus (BeYDV) replicon to supply high doses of homologous donor templates, and used a CRISPR/LbCpf1 system (Zetsche et al., 2015) for DSB formation (Figure 1A and 1B). Selection of HDR events was supported by a double selection/screening system using kanamycin resistance and anthocyanin overproduction (Figure 1A). The LbCpf1 system using two guide RNAs targeting the ANT1 gene, a key transcription factor controlling the anthocyanin pathway, showed much higher HDR efficiency, at 4.51 ± 0.63%, visible as purple calli and/or shoots (Figure 1C and 1D), compared to the other control constructs, including a "minus Rep" (pRep-) and a "minus gRNA" (pgRNA-) construct, and was comparable to a CRISPR/SpCas9-based construct (pTC217). The data revealed that functional geminiviral replicons were crucial for the enhancement of HDR efficiencies (Figure 1C), as shown in other work (Čermák et al., 2015). This is the first report showing highly efficient HDR in plants using Cas12a expressed from a geminiviral replicon.

Light conditions or photoperiods enhanced HDR efficiency of the CRISPR/LbCpf1 system
Boyko and coworkers (2005) showed the strong impact of short-day conditions on intrachromosomal recombination repair (ICR) in Arabidopsis. We tested whether the same could be true in tomato somatic cells. Using various lighting regimes, including complete darkness (DD) and short-day (8-h light/16-h dark; 8L/16D) and long-day (16L/8D) conditions, we found that HDR efficiencies achieved under short- and long-day conditions were higher than those in the DD condition in the case of LbCpf1, but not SpCas9, and reached up to 6.62 ± 1.29% (p < 0.05, Figure 1E). The advantage of the LbCpf1-based HDR system might be explained by stress responses of the host cells, which rush to maintain genome stability (Boyko et al., 2005) by any means of DNA repair, including HDR.
CRISPR/LbCpf1-based HDR was significantly higher compared to the CRISPR/Cas9-based system at high temperature
Temperature is an important factor controlling ICR (Boyko et al., 2005), CRISPR/Cas9-based targeted mutagenesis in plants (LeBlanc et al., 2018), and CRISPR/Cpf1-based HDR.

To compete with the efficient NHEJ pathway, proteins involved in the HDR pathway have been over-expressed, activated or enhanced, leading to significantly higher efficiencies (Ye et al., 2018; Pawelczak et al., 2018). For further improvement of our system, we used several molecular approaches for HDR improvement in tomato. The first was to activate nine HDR pathway genes (Supplemental Table 1) using the dCas9-SunTag/scFv-VP64 activation system (Tanenbaum et al., 2014). A single-construct system (pHR01-Activ, Supplemental Figure 2A) showed negative effects on HDR (data not shown), which may be due to its large size (~32 kb as T-DNA and ~27 kb as circularized replicon).

The size of viral replicons is inversely correlated with their copy numbers (Suarez-Lopez and Gutierrez, 1997; Baltes et al., 2014). In this work we also tested the novel idea of using a T-DNA producing multiple replicons (pHR01-MR, Figure 2A and Supplemental Figure 2B). Compared to pHR01, this construct showed a 39% increase in HDR efficiency. We also confirmed the release of three replicons from the single vector (pHR01-MR) used in this work (Figure 2B). To the best of our knowledge, this is the first report that multiple replicons can be used for efficient genome editing via the HDR pathway. This multiple-replicon system may also provide more flexible choices for expressing multiple genes, genetic tools or DNA agents at high copy number in plant cells.

True HDR events were obtained at high frequency
To verify the HDR repair events in this study, PCR analyses were conducted using primers specific for the right (UPANT1-F1/NptII-R1) and left (ZY010F/TC140R) junctions (Figure 1A; Figure 5B).

The HDR allele was stably inherited in offspring by self-pollination as well as backcrossing
To confirm stable heritable edits, we grew Genome Edited generation 1 (GE1) plants (Figure 2E) obtained from self-pollination of LbCpf1-based HDR GE0 events, and found a population segregating for the purple phenotype (Supplemental Table 4), similar to data shown by Čermák and coworkers.

HDR efficiencies were recorded in at least three replicates, statistically analyzed and plotted using PRISM 7.01 software. In Figure 1C, multiple comparisons of the HDR efficiencies of the other constructs with that of pRep- were done by one-way ANOVA (uncorrected Fisher LSD test, n = 3, df = 2; t = 4.4, 4.4 and 1.5 for pTC217, pHR01 and pgRNA-, respectively).

(A) Alignment showing the perfectly edited HKT1;2 N217-to-D217 allele with the WT allele as a reference. The nucleotides highlighted in the discontinuous red boxes denote the intended modifications for N217D and the PAM and core sequences (to avoid re-cutting). (B) HDR construct layout for HKT1;2 editing. Neither a selection nor a visible marker was integrated into the donor sequence; the NptII marker was used only for enrichment of transformed cells. (C) Morphology of the HKT1;2 N217D edited event compared to its parental WT in greenhouse conditions. Scale bar = 1 cm.
Supplemental Figure 1. The de novo engineered geminiviral amplicon (named pLSL.R.Ly) and its replication in tomato. (A) Map of pLSL.R.Ly. The DNA amplicon is defined by its boundary sequences (Long Intergenic Region, LIR) and a terminator sequence (Short Intergenic Region, SIR). The replication-associated protein (Rep/RepA) is expressed from the LIR promoter sequence. All of the expression cassettes of HDR tools were cloned into the vector by replacing the red marker (lycopene) using a type IIS restriction enzyme (BpiI; flanking ends TGCC and GGGA). Left (LB) and right (RB) denote the borders of the T-DNA. (B) Detection of circularized DNA in tomato leaves infiltrated with pLSL.R.Ly compared to pLSLR. Agrobacteria containing the plasmids were infiltrated into tomato leaves (Hongkwang cultivar), and infiltrated leaves were collected at 6, 8 and 11 dpi and used for detection of circularized DNAs. N: water; P1: positive control for pLSL.R.Ly; P2: positive control for pLSLR; Cx: control sample collected at x dpi; Ixy: infiltrated sample number y collected at x dpi; I11V: sample collected from leaves infiltrated with pLSLR at 11 dpi. PCRs using primers specific to GAPDH were used as a loading control.

Alignment of targeted regions isolated from the HKT1;2 events: 18/25 events (highlighted in yellow) showed strong double peaks indicating single/bi-allelic mutations; 6/25 events showed clearly bi-allelic mutations; C77 showed weak (30%) double peaks; C83 and C105 showed large truncations.

Supplemental Figure 10. Timeline and contents of the Agrobacterium-mediated transformation protocol used in this work. A step-by-step protocol is presented; each number in the circles indicates the number of days after seed sowing (upper panel), and the treatments used in each step are shown in the lower panel.
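The statistical treatment described for Figure 1C (one-way ANOVA followed by an uncorrected Fisher LSD test on triplicate HDR efficiencies) can be outlined as below. The efficiency values are hypothetical placeholders, not the study's data; scipy.stats.f_oneway performs the ANOVA step, and the uncorrected LSD step reduces to plain pairwise t-tests.

# Hypothetical sketch (placeholder data, not the study's code): one-way
# ANOVA of HDR efficiencies (%) across constructs, n = 3 replicates each.
from scipy import stats

pRep_minus = [0.1, 0.2, 0.1]   # placeholder triplicates
pHR01      = [4.0, 4.5, 5.0]
pTC217     = [4.2, 4.6, 4.9]

f_stat, p_value = stats.f_oneway(pRep_minus, pHR01, pTC217)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Uncorrected Fisher LSD: pairwise t-tests without multiplicity correction.
print(stats.ttest_ind(pHR01, pRep_minus))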
1,888.6
2019-01-16T00:00:00.000
[ "Biology", "Environmental Science" ]
TYR as a multifunctional reporter gene regulated by the Tet-on system for multimodality imaging: an in vitro study

The human tyrosinase gene TYR is a multifunctional reporter gene with potential use in photoacoustic imaging (PAI), positron emission tomography (PET), and magnetic resonance imaging (MRI). We sought to establish and evaluate a reporter gene system using TYR under the control of the Tet-on gene expression system (gene expression induced by doxycycline [Dox]) as a multimodality imaging agent. We transfected TYR into human breast cancer cells (MDA-MB-231), naming the resulting cell line 231-TYR. Using non-transfected MDA-MB-231 cells as a control, we verified successful expression of TYR by 231-TYR cells after incubation with Dox using western blot, cellular tyrosinase activity, Masson-Fontana silver staining, and a cell immunofluorescence study, while the control cells and 231-TYR cells without Dox exposure revealed no TYR expression. Detected by its absorbance at 405 nm, the melanin concentration correlated positively with Dox concentration and incubation time. TYR expression by Dox-induced transfected cells shortened the MRI T1 and T2 relaxation times. Photoacoustic signals were easily detected in these cells. 18F-5-fluoro-N-(2-[diethylamino]ethyl)picolinamide (18F-5-FPN), which targets melanin, quickly accumulated in Dox-induced 231-TYR cells. These results show that TYR induction of melanin production is regulated by the Tet-on system, and that TYR-containing indicator cells may have utility in multimodality imaging.

Indirect imaging strategies rely on reporter genes, using a probe that specifically binds to the gene product. Commonly used reporter gene products include the thymidine kinase produced by herpes simplex virus type 1 (HSV1-tk) and the sodium iodide symporter (NIS) labelled with radiopharmaceuticals 7-10, green fluorescent protein (GFP) and firefly luciferase (Fluc) 11,12, used in fluorescence imaging, and ferritin and tyrosinase 13,14, used in MRI. The indirect strategy often needs two, three, or even more reporter genes fused into cells. In our previous experiments, a triple-fused reporter gene (HSV1-tk, GFP and Fluc) was prepared for PET, fluorescence and bioluminescence imaging 11. Gene fusion processes are difficult: the linkers, the distance between reporter genes, and the orientation of each reporter gene are the key factors. This has inspired a search for simpler probes.

Human tyrosinase (TYR), a key enzyme, catalyses the three most important steps in melanin production: oxidation of tyrosine to DOPA, DOPA to dopaquinone, and 5,6-dihydroxyindole to 5,6-indolequinone 15. Melanin production rate and yield correlate positively with TYR expression and activity 16. After transduction of TYR into cells and the production of an active tyrosinase, melanin synthesis is activated. The advantage of melanin is its multiple properties that can be imaged with different modalities. Its wide absorption spectrum, from the ultraviolet to the near infrared, enables its use in photoacoustic imaging 17,18. Its affinity for iron can be as high as 16% of its own weight 19. Ionised iron has high signal intensity on MRI T1-weighted images (T1WI), the intensity increasing with increasing ion concentration 14. In addition, some studies have found that benzamide and its analogues specifically bind to melanin, and several radiopharmaceuticals, 125I-BZA and 123/131I-IBZA (for SPECT imaging), have been developed for the diagnosis of melanoma 20,21.
Based on the same principle, some PET probes, such as N-[2-(diethylamino)ethyl]-6-18F-fluoropicolinamide (18F-MEL050), have demonstrated high and specific binding to melanin both in vitro and in vivo 22. Another positron probe, 18F-5-fluoro-N-(2-[diethylamino]ethyl)picolinamide (18F-5-FPN), prepared by our group, has been shown to specifically target melanin in vitro and in vivo with high retention, high affinity and favourable pharmacokinetics 23. Potentially, using TYR as a reporter gene, one could perform PAI, MRI, and PET or SPECT imaging. Previous studies have demonstrated that TYR can be used as a multifunctional reporter gene for PAI/MRI or PAI/MRI/PET imaging both in vitro and in vivo 24,25.

In gene therapy and related gene studies 26, it has been demonstrated that controlling the timing and degree of gene expression with an activator substance is much better than sustained expression of a gene product, as the sustained expression of exogenous genes or proteins may result in unexpected adverse effects 27. Since the advent of the Tet-off and Tet-on gene expression systems 28,29, both have been widely used in various prokaryotic and eukaryotic models 30,31. TYR, to act as a reporter gene, needs to be transfected and integrated into cells, and the Tet-on tetracycline gene induction system is widely used for inducible expression, as it can effectively control gene expression in vivo and in vitro using doxycycline (Dox) as the activator 27,32. The system was evaluated in vitro under Dox control to establish the feasibility of multimodality imaging.

Identification of tyrosinase expression in different groups after Lenti-X Tet-On 3G-TYR transduction. We successfully constructed the lentiviral vector Lenti-X Tet-On 3G-TYR and selected a stable breast cancer cell line expressing TYR using puromycin. To measure the expression of the TYR gene in 231-TYR + Dox, 231-TYR and 231 cells, western blot was performed (Fig. 2A). We found that TYR was only successfully expressed in 231-TYR cells treated with Dox (231-TYR + Dox) and not in the control cells (231-TYR and 231 cells). The gene expression product tyrosinase, the key enzyme, catalyses melanin production, and melanin then serves as a multifunctional target for photoacoustic imaging (PAI), positron emission tomography (PET) and magnetic resonance imaging (MRI). Cellular tyrosinase activity was also assessed by measuring the amount of dopachrome. Figure 2B shows that the amount of dopachrome in 231-TYR + Dox cells increased over time, while no dopachrome was found in the control groups exposed to Dox. TYR activity in 231-TYR + Dox cells was significantly higher than that in the control cells (P < 0.05 for all time points). The 231-TYR + Dox, 231-TYR, and 231 cells were collected, and melanin expression was estimated by visual inspection (Fig. 2C). An obvious black colour was visible in the 231-TYR + Dox cells, while the other samples just showed the colour of the culture medium. Melanin was also identified by Masson-Fontana silver staining, with coarse black particles found only in the 231-TYR + Dox cells (Fig. 2D).

Results of cell immunofluorescence studies. To further assess the expression of TYR, we performed immunofluorescence experiments. The immunofluorescence results in Fig. 3 demonstrate that TYR products were expressed by the 231-TYR + Dox cells, and not by the control cells.

Dox regulation of melanin production.
We quantified the effect of Dox-induced TYR expression as a function of Dox dosage and duration of exposure in 231-TYR + Dox cells. As shown in Fig. 4A, the Dox concentration and the melanin yield were positively correlated, with melanin production peaking at a Dox concentration of 2000 ng/mL. Figure 4B displays the Dox-induced melanin yield in 231-TYR cells in relation to the length of Dox incubation, the melanin yield gradually increasing from 4 to 48 h and peaking at 48 h. Melanin began to decrease 4 h after the withdrawal of Dox and returned to normal levels at about 48 h (Fig. 4C). This suggests that Dox should be withdrawn in advance if the effect of the reporter gene is to be stopped.

Cell MRI. Different cell concentrations were used to study the sensitivity of MRI for the detection of melanin (Fig. 5). We found that 231-TYR + Dox cells cultured in FeCl3-enriched medium displayed a much higher signal on T1-weighted images (T1WI) compared with 231-TYR and 231 cells (Fig. 5, left). The T1 relaxation times of 231-TYR + Dox cells at the maximum concentration, with and without FeCl3, were 1216.13 and 2470.91 msec, respectively, indicating shortening of the T1 relaxation time by 50.78%. We also found that 231-TYR + Dox cells cultured in FeCl3-enriched medium displayed much lower signals on T2-weighted images (T2WI) compared with 231-TYR and 231 cells (Fig. 5, right). The T2 signal decreased with increasing numbers of 231-TYR + Dox cells. The T2 relaxation times of 231-TYR + Dox cells at the maximum concentration, with and without FeCl3, were 29.58 and 84.76 msec, respectively; the iron shortened the T2 relaxation time by 65.1%. The three cell lines cultured in medium without FeCl3 enrichment did not produce detectable T1- or T2-weighted signal changes, and FeCl3 treatment of the control cells only slightly increased and decreased the T1 and T2 relaxation times, respectively.

Cell PAI. Figure 6 shows the photoacoustic signals of different concentrations of cells ranging from 1 x 10^5 to 2 x 10^7 /mL. The cell samples were located 2 mm below the surface of the gel phantoms.

Discussion
In this study, we successfully constructed a lentiviral vector complex containing TYR as a reporter gene and used the Tet-on system to control its expression. After transducing TYR into the breast cancer cell line MDA-MB-231, a stable line expressing TYR (231-TYR) was established and screened. We verified that Dox induction could precisely regulate the expression of TYR. Further, we demonstrated that tyrosinase, as a multifunctional reporter gene product, could be used for MRI/PET/PAI multimodality imaging in vitro.

TYR has been used as a reporter gene for magnetic resonance imaging 25. Most previous studies using TYR as an MRI reporter gene have analysed changes in the T1 signal; nonetheless, Fe(III) also has an impact on the T2 relaxation time. In this study, we observed that TYR changed both the T1 and T2 relaxation times (Fig. 5), consistent with the signal changes observed in images of pigmented melanoma tumours 33. Quantitative analysis revealed that the T2 relaxation time changed more than the T1 in 231-TYR + Dox cells.

Photoacoustic imaging (PAI) can be used for functional and molecular imaging with endogenous and exogenous contrast agents. Melanin is a common endogenous contrast agent 34. In our study, photoacoustic signal changes were only detected in melanotic 231-TYR + Dox cells (Fig. 6).
Signal detection was very sensitive, identifying signals from only 5 x 10^4 231-TYR + Dox cells. The sensitivity in our study was lower than the results of Qin et al. 25, which may be related to differences in instrumentation or different levels of TYR expression.

We prepared and evaluated 18F-5-fluoro-N-(2-[diethylamino]ethyl)picolinamide (18F-5-FPN), which has a high affinity for melanin, in our previous study 23. In this study, 18F-5-FPN specifically bound to the melanin in 231-TYR + Dox cells and was blocked by excess nonradioactive standard (Fig. 7), demonstrating the feasibility of TYR as a reporter gene for PET imaging.

Of the three imaging modalities, PAI has the highest sensitivity. However, when a spatial resolution of 1 mm is necessary, its penetration is less than 5 cm because of the optical attenuation effect. In addition, ultrasound signals cannot penetrate hollow viscera or lung tissue owing to the acoustic impedance effect. PET and MRI do not suffer from these limitations. MRI shows a characteristic signal pattern on T1WI and T2WI, with high spatial resolution. 18F-5-FPN for PET imaging of melanin/melanoma exhibits high specificity and can provide functional information. Therefore, a single reporter gene for PAI/MRI/PET multimodality imaging could make up for each modality's shortcomings.

Effective control of the timing and level of gene expression is better than sustained gene expression in gene therapy. Sustained expression of exogenous genes or proteins may result in adverse effects and receptor downregulation. The Tet-On 3G system consists of three parts: a regulating unit, response elements that connect with the TYR gene, and inducers. The tetracycline repressor (Tet repressor, TetR) and the ubiquitin promoter (Ubi) compose the regulating unit; TetR is a repressor protein of the tetracycline-inducible promoter (TetIIP). TetIIP, an inducible promoter, mediates expression of the TYR gene. Once Tet (Dox) binds to TetR and releases its inhibition of TetIIP, TetIIP induces TYR gene expression. In our study, TYR was only expressed in the presence of Dox in 231-TYR cells, as shown by western blot, Masson-Fontana silver staining, and immunofluorescence experiments (Figs 2 and 3). Additional studies of the dosage and the length of Dox exposure were conducted (Fig. 4). These results confirmed that the Tet-on system responds quickly to Dox and can induce TYR gene expression reversibly, quantitatively and reproducibly.

We demonstrated the potential use of TYR for PAI/MRI/PET multimodality imaging in vitro. In the future, its potential as an in vivo probe for multimodal imaging should be investigated for the following reasons: (1) TYR is an endogenous, highly biocompatible gene, with the potential for low measurable impact when transfected into amelanotic cells. (2) Dox is an attractive agent for inducing gene expression in vivo. (3) TYR encodes tyrosinase in the transfected cells, which is the key enzyme for synthesising melanin. Melanin is a polymer and contains multiple binding sites for paramagnetic iron ions, while simultaneously binding benzamide radiopharmaceuticals, making PET/MRI feasible. (4) Used as a multifunctional reporter gene for PAI/MRI/PET imaging, TYR may not only solve problems of spatial resolution and sensitivity, but may also enable imaging of microvessels involved in angiogenesis by Doppler photoacoustic tomography. TYR also has potential as a therapeutic agent.
Melanin, produced by the tyrosinase expressed from TYR, significantly enhances the absorption of light in the near infrared, a region characterised by low tissue absorption and maximum light penetration. Stritzker et al. 35 used a near-infrared laser to specifically transfer energy to melanin; the transferred energy was converted to heat, which raised the melanin-producing cells to a high temperature, causing protein denaturation and cell death. In addition, benzamide and its analogues have been labelled with radionuclides to irradiate melanomas, with promisingly low, transient uptake in the excretory organs. These data indicate that systemic radionuclide therapy using benzamides for pigmented melanoma holds considerable potential 36,37. TYR transfection of tumours, causing them to synthesise melanin that is subsequently targeted by radiolabelled benzamides, may be an effective method of unsealed-source therapy.

Conclusions. We successfully demonstrated that transfected human TYR can induce the production of melanin in amelanotic cells, and that the gene expression can be accurately regulated by the Tet-on system. This preliminary in vitro study suggests that TYR, as a single reporter gene, can change the T1 and T2 relaxation times on MRI, the signals on PAI, and the accumulation of a PET tracer, which suggests its feasibility for multimodality molecular imaging. Further studies in vivo are necessary.

Construction of the lentiviral vector complex containing TYR. The GV308 lentiviral vector (TetIIP-MCS-3FLAG-Ubi-TetR-IRES-Puromycin; 12.4 kb; Gene Chem Co., Ltd, Shanghai, China) was used. TYR DNA was amplified by a polymerase chain reaction (PCR) with primers flanking the TYR open reading frame, with BamHI and NheI restriction enzyme sequences within the 5' and 3' primers, respectively, and was purified using a gel extraction kit (Qiagen, Tiangen Biotech Co., Ltd, Beijing, China). The purified TYR cDNA insert and the GV308 vector were both digested with BamHI and NheI restriction enzymes (New England Biolabs, Inc., Ipswich, MA, USA) and ligated together with DNA ligase (New England Biolabs). The ligation mixture was used to transform E. coli DH5α competent cells, which were plated on LB plates supplemented with puromycin and incubated for 24 h at 37 °C. Bacterial colonies were picked and plasmid DNA was isolated from the resulting colonies. After the recombinant plasmid was identified by DNA sequencing and double restriction enzyme digestion, plasmid preparation (Maxiprep, Qiagen) was performed, and the concentration of the plasmid was measured. Then, the recombinant plasmid DNA and liposomes were co-transfected into human embryonic kidney 293T cells. We collected the cell supernatant containing the lentiviral particles, concentrated it, and measured the virus titre. The recombinant expression vector was named Lenti-X Tet-On 3G-TYR and stored at -80 °C.

To establish a stable cell line expressing TYR, MDA-MB-231 cells were seeded into 6-well plates at a density of 5 x 10^5 per well and incubated overnight. We co-transduced these cells with the lentivirus Lenti-X Tet-On 3G-TYR (multiplicity of infection, MOI = 2) and the transfection enhancer polybrene (Gene Chem Co., Ltd, Shanghai, China), and the cells were returned to complete medium 10 h after transduction. Seventy-two hours after transfection, cells were trypsinised and diluted to a 1000 cells/mL single-cell suspension, and seeded into 96-well plates by the limiting dilution method.
After the cells adhered, we observed them carefully under a microscope, choosing and marking wells observed to contain only 1-2 cells. The next day, these cells were cultured in L-15 medium with 10% FBS containing 1 μg/mL puromycin. The medium was changed every 2-3 days. Cells grew for several weeks until large colonies were visible. Dox (2 μg/mL) was added to each well containing colony cells as an inducer, and the wells were carefully observed under light microscopy. The colony with the darkest colour was considered capable of producing melanin and was termed 231-TYR. This colony was trypsinised from the 96-well plate, cultured and used for subsequent experiments.

Experimental and control groups. The cells were divided into three groups: (1) 231-TYR cells incubated with Dox (231-TYR + Dox), (2) 231-TYR cells without Dox, and (3) non-transfected MDA-MB-231 (231) cells.

TYR detection by western blot. The cells in the six-well plates were washed twice with cold PBS (0.01 M, pH 7.2) and dissolved in 300 μL radio-immunoprecipitation assay buffer containing protease inhibitors. The lysates were centrifuged at 12000 rpm for 15 min at 4 °C, and the supernatants were collected. Total cellular proteins (20 μg per lane) were resolved using 10% sodium dodecyl sulphate polyacrylamide gel electrophoresis (Bio-Rad, Hercules, CA, USA) and transferred to a nitrocellulose membrane (Bio-Rad). The membranes were blocked at 4 °C for 1 h in Tris-buffered saline with Tween 20 (TBST) supplemented with 5% non-fat milk. After a brief rinse, the membranes were incubated overnight at 4 °C in TBST containing 5% bovine serum albumin with the primary antibody diluted in TBST (tyrosinase monoclonal antibody, 1:500; Sigma Chemical Corporation, St. Louis, MO, USA). A glyceraldehyde-3-phosphate dehydrogenase (GAPDH, 1:1000; Santa Cruz Biotechnology, Santa Cruz, CA, USA) polyclonal antibody was used as an internal control. The blots were washed three times with TBST for 10 min, followed by a 1-h incubation with a horseradish peroxidase-conjugated anti-mouse IgG antibody (1:2000, Santa Cruz) at room temperature. The antigen-antibody peroxidase complex was visualised using enhanced chemiluminescence reagents (ECL, Amersham Biotechnology, Piscataway, NJ, USA) according to the manufacturer's protocol.

Assessment of cellular tyrosinase activity. The sample preparation procedure was the same as that described for the western blot assay. After quantifying the protein levels, the concentration of the samples was adjusted to 0.5 μg/μL. Tyrosinase activity was measured as per published protocols with some modifications 25.

Masson-Fontana silver staining. Staining was carried out for 35-40 min. After rinsing in distilled water, the cells were quickly incubated with sodium thiosulfate solution for 1 min. Finally, the coverslips were counterstained with neutral red staining solution for 5 min, sealed, and observed under a microscope (Nikon Eclipse 90i; Kawasaki, Kanagawa, Japan). Noticeable black particles could be seen in the 231-TYR + Dox cells.

Cell immunofluorescence study. The sample preparation procedure was the same as that for the Masson-Fontana silver staining. The coverslips were rinsed with PBS, blocked with 1% bovine serum albumin and incubated with a primary antibody (mouse anti-TYR, diluted 1:500; Sigma) overnight at 4 °C. After rinsing in PBS, the cells were incubated with a diluted secondary antibody (Alexa Fluor 488-labelled goat anti-rabbit IgG, diluted 1:200; Beyotime, Beijing, China) at 37 °C for 60 min.
Finally, the coverslips were incubated with 4',6-diamidino-2-phenylindole (DAPI; Beyotime) for 5 min, sealed with an anti-quenching mounting agent, and observed under a confocal microscope (LSM 710; Zeiss, Oberkochen, Germany).

Measurement of melanin content in 231-TYR cells regulated by Dox. A sample of the 231-TYR cells was digested, re-suspended and cultured in flasks overnight. The cells were then incubated with Dox at serial concentrations (10-4000 ng/mL) at 37 °C for 48 h. The melanin content of these cells was measured as described previously, with some modifications 39. The cultured cells were harvested and washed with PBS, then incubated in 500 μL of 1 N NaOH in an 80 °C water bath for 2 h, after which the solution was mixed. After determination of the protein content, the protein concentration was adjusted to 0.5 μg/μL, and the extracts were transferred into 96-well plates in triplicate 50-μL aliquots. The relative melanin content of the samples was determined by measuring their absorbance at 405 nm. Results were expressed as absorbance at 405 nm per mg protein.

The effect of the Dox incubation time on melanin production in 231-TYR cells was also investigated. After the cells were cultured in flasks overnight, they were placed in fresh medium containing Dox (2 μg/mL) and incubated for 0, 1, 2, 4, 8, 16, 24, 36, 48, and 72 h. The melanin content at the different incubation times was measured as described previously. To assess the reversibility of the Dox effect on TYR expression, the changes in melanin content after withdrawing Dox were studied in 231-TYR + Dox cells. We cultured the 231-TYR cells in medium containing Dox (2 μg/mL) for 48 h, then replaced the medium with fresh medium without Dox. The cells were digested and collected for determination of melanin content at 0, 1, 2, 4, 8, 16, 24, 36, and 48 h after Dox removal.

Cell MRI. Cell phantoms were prepared as follows 25: 96-well PCR plates were embedded in a cuboid container filled with 1% UltraPure agarose gel (Invitrogen, Carlsbad, CA, USA). After solidification, the tubes were pulled out, and the bottoms of the resulting holes were filled with 100 μL of 1% agarose. Different concentrations of cells (100 μL, ranging from 2.5 x 10^7 /mL to 1 x 10^8 /mL) suspended in 1% agarose were layered into the middle part of the holes, and the surface of the phantom was then covered with a thin layer of 1% agarose gel. MRI was then performed.

Cell PAI. Agarose phantoms were prepared using PCR tubes. The bottoms of the tubes were filled with 1% agarose gel in distilled water (150 μL). After cooling, different concentrations of cells (50 μL) ranging from 1 x 10^5 /mL to 2 x 10^7 /mL suspended in 1% agarose were loaded into the middle part of the tubes, and the tops of the tubes were filled with 1% agarose. An acoustic-resolution photoacoustic microscopy system manufactured in-house by the National Laboratory for Optoelectronics, Huazhong University of Science and Technology (Wuhan, China), was used to acquire photoacoustic images with a laser at an excitation wavelength of 532 nm, a focal depth of 6 mm, a pulse width of 6 ns and a pulse repetition rate of 30 Hz.

Cell uptake studies of 18F-5-FPN. Preparation of 18F-5-FPN was conducted with the same protocol as described in our previous study 23. The cellular uptake studies were performed in all experimental and control groups (231-TYR + Dox, 231-TYR, and 231 cells).
Cells at a density of 1 x 10^5 per well were seeded in 24-well plates and incubated overnight. Then, the cells were incubated with 0.2 mL of medium containing 37 kBq (0.5 pM) of 18F-5-FPN at 37 °C. At 30, 60, or 120 min after incubation, the medium was removed and the cells were washed three times with PBS (pH 7.4) and lysed with 1 N NaOH for 5 min at room temperature. The radioactivity of the cell lysate was measured by a gamma counter (2470 WIZARD; PerkinElmer, Waltham, MA, USA). For the cell efflux study, the cells were plated overnight; 18F-5-FPN (37 kBq, 0.5 pM) was added to each well and incubated for 2 h at 37 °C. After being washed twice with PBS, the cells were incubated in culture medium for 15, 30, 60, or 120 min and then lysed with 1 N NaOH. For the blocking study, 1 x 10^5 231-TYR + Dox cells were seeded overnight and incubated at 37 °C for 1 h with 18F-5-FPN (37 kBq, 0.5 pM) in the presence of 100 μL of the nonradioactive standard 19F-5-FPN (10^-12 to 10^-5 M). Then, the cells were washed and the radioactivity measured as in the cellular uptake study.

Statistical analysis. Quantitative data were expressed as mean ± standard deviation (SD). Means were compared using one-way ANOVA and Student's t-test, with P < 0.05 indicating statistical significance.
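As a cross-check of the relaxation-time results reported above, the short sketch below recomputes the percentage shortening of T1 and T2 from the quoted values; this is illustrative arithmetic, not code from the study.

# Illustrative check (values quoted in the Results): percent shortening
# of T1/T2 relaxation times for 231-TYR + Dox cells with vs without FeCl3.
def percent_shortening(t_without_fe, t_with_fe):
    return (t_without_fe - t_with_fe) / t_without_fe * 100

print(f"T1: {percent_shortening(2470.91, 1216.13):.2f}%")  # -> 50.78%
print(f"T2: {percent_shortening(84.76, 29.58):.2f}%")      # -> 65.10%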
5,699.4
2015-10-20T00:00:00.000
[ "Biology" ]
Redesigning the Serpent Algorithm by PA-Loop and Its Image Encryption Application

This article presents a cryptographic encryption standard whose model is based on Serpent, presented by Eli Biham, Ross Anderson, and Lars Knudsen. The modification lies in the design of the cipher: we have used a power associative (PA) loop and a group of permutations. The proposed mathematical structure is superior to the Galois field (GF) in terms of complexity and has the ability to create arbitrary randomness due to a larger key space. The proposed method is simple and speedy in terms of computation, while affirming higher security and sensitivity. In contrast to GF, PA-loops are non-isomorphic and have several Cayley table representations. This supports resistance to cryptanalytic attacks, particularly those targeting mathematical structures. The encryption and decryption procedures of this cryptographic scheme are fully described and rigorously assessed to support its multimedia applications. The observed speed of this technique, which uses a key of 256 bits and a block size of 128 bits, is comparable to three-key triple-DES.

I. INTRODUCTION
Extensive deployment of soft computing devices has changed the overall communication pattern around the globe. All these devices are connected via the Internet, relying on an insecure medium. The exponential growth of soft computing devices has brought disadvantages such as insecure communications, violation of copyright protection, and alteration of valuable information. Even communication in terms of images is affected by such threats. Generally, to reduce their impact, encryption is considered a healthy tactic for attaining a higher security level. For that reason, image encryption has achieved extensive importance in Internet communication, medical imaging, multimedia systems, telemedicine, etc.

Encryption schemes are usually categorized in two main divisions: the spatial domain and the frequency domain. The permutation of positions, the transformation of pixel values, and their amalgamation are used in the spatial domain. The literature reveals many encryption schemes in this domain, but the prominent schemes, keeping [1] in mind, are 2D cellular automata-based methods [2], tree-structure-based schemes [3], and chaos-based cryptosystems [4], [5], [6], [7]. In [8], a quadtree structure is used for encryption, which reduces the processing time of both encryption and decryption, but it has not gained a place in international standards. Similarly, many chaos-based schemes [10], [11], [12] have been proposed due to specific attributes like sensitivity to initial conditions, randomness, ergodicity, and complex bifurcation patterns. Certain loopholes appearing in such cryptosystems can be minimized by using higher-dimensional chaotic systems. Usual encryption schemes based on chaotic maps generally use two processes, i.e., substitution and diffusion, that are iterated for a certain number of rounds. Pixels of images are substituted by the outcomes of chaotic maps, which are then altered in the diffusion stage by a certain sequential rearrangement. One small alteration in the pixels results in a totally dissimilar output after a certain number of iterations. Such schemes are very common in the literature [13].

A. RELATED WORK
There are some techniques that make use of their own proposed structures. Still, speed and security are issues in such schemes. These drawbacks create space for new cryptosystems.
At the start of the 21st century, after a long period of successful service, the DES algorithm [14] lost its popularity. The first allegation against it was its short key length, i.e., a 56-bit key, which can be traced by exhaustive key search given the ever-increasing growth of fast computing devices. This was addressed by introducing triple DES. Another objection was its application in software encryption, since its design was intended for hardware enciphering. Due to these drawbacks, NIST in the US welcomed a new and vibrant successor algorithm, later called the Advanced Encryption Standard (AES) [40]. The distinction of AES over its predecessor was due to two reasons: first, it was speedy enough to cope with the new technological developments of the 21st century, and meanwhile it did not compromise on security. Image encryption using a block-cipher-based Serpent algorithm is presented in [15], where a proposed algorithm for image protection depends on the Serpent block cipher in a Feistel network structure. Another scheme for the improvement of the Serpent algorithm, designed for an RGB image encryption implementation, is presented in [16].

The remainder of this manuscript is structured as follows. Section II presents the fundamental definitions, the algebraic structure of PA-loops, and the synthesis of our suggested S-boxes. The purpose of Section III is to evaluate the effectiveness of the newly proposed S-boxes in comparison with a few well-known S-boxes. Section IV carries out the image encryption application of the suggested S-boxes. In Section V, the statistical analyses of the proposed S-box image encryption system are compared to those of other well-known schemes. Sections VI and VII, in order, provide the differential analyses of the scheme and the conclusion.

B. OUR CONTRIBUTION
This article explains the application of the S-box in the Serpent algorithm together with image encryption. The step-wise contribution of our manuscript is as follows. 1) The construction of S-boxes by utilizing the Mobius transformation over a PA-loop is explained, and it is ensured that the proposed technique is suitable for image encryption. 7) We have also calculated the execution time of our proposed encryption scheme and compared it with other algorithms.

II. PRELIMINARIES
A few basic definitions of PA-loops and their comprehensive structure are highlighted in this section. In addition, the formation of the S-box using this new structure is explicated. Later, the application of the symmetric group S16 is also considered.

A. POWER ASSOCIATIVE LOOP
For a quasigroup, it is necessary that a groupoid (a non-empty set with a binary operation) satisfy the cancellation laws. If there exists a two-sided identity e' ∈ L' (where L' is a non-empty set with a binary operation) and for each c, d ∈ L' the equations cx = d and yc = d have unique solutions x, y ∈ L', then L' is called a loop. Moreover, if the subloop generated by any element of L is a cyclic subgroup, then the non-empty set L is called a power associative loop (PA-loop); that is, each subloop generated by a single element is a cyclic subgroup [12]. Due to the absence of the associative property, a PA-loop admits a greater variety of structures than is possible in associative structures like groups and rings. To express this fact, Table 1 indicates the difference in the number of available possibilities between power associative loops and groups of the same order. A small sketch of the corresponding membership checks appears below.
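The following is a minimal Python sketch (not the authors' code) of the membership checks implied by these definitions: the Latin-square and identity checks certify a loop, and a simple power check gives a necessary (but not sufficient) condition for power associativity. A 5-element cyclic table stands in for the order-256 PA-loop used in the paper, whose Cayley table is not reproduced here.

# Minimal sketch (assumed toy table): loop and power-associativity checks.
n = 5
T = [[(i + j) % n for j in range(n)] for i in range(n)]  # toy Cayley table

def is_loop(T):
    n = len(T)
    # Latin square: every row and every column is a permutation of 0..n-1
    rows = all(sorted(r) == list(range(n)) for r in T)
    cols = all(sorted(T[i][j] for i in range(n)) == list(range(n)) for j in range(n))
    # Two-sided identity e with e*x = x*e = x for all x
    ident = any(all(T[e][x] == x and T[x][e] == x for x in range(n)) for e in range(n))
    return rows and cols and ident

def power_associative_necessary(T):
    # a*(a*a) == (a*a)*a for every a: necessary, not sufficient.
    return all(T[a][T[a][a]] == T[T[a][a]][a] for a in range(len(T)))

print(is_loop(T), power_associative_necessary(T))  # -> True True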
B. DESIGN OF S-BOXES OVER A POWER ASSOCIATIVE LOOP
Different cryptosystems use different methods to generate confusion in the data; however, substitution boxes are the best-known source of confusion in the literature. Most of these structures depend upon the Galois field, and some belong to Z_2^n, the set of n-tuples over the binary field Z_2. These classes are associative and therefore show limited variety, as indicated in Table 1. A power associative loop has more structures than groups and Galois fields due to non-associativity, which gives us different choices for designing S-boxes. The variety of S-boxes makes cryptosystems secure and helps them resist spiteful attacks. Many techniques for the construction of S-boxes are given in the literature, of which the Mobius transformation is one. Several different S-boxes are created by the Mobius transformation, i.e., the action of the projective general linear group on a power associative loop of order 256. The technique has the fractional form

g(y) = (c * y XOR d) * (e * y XOR f)^(-1),

where * denotes the loop operation. The values of c and e are fixed at 4 and 9, respectively, while d and f vary from 0 to n - 1. Take the values y = 0 : n - 1, use the table of the PA-loop to find the values corresponding to e * y and c * y, and convert them into binary numbers. Apply XOR in the numerator and denominator and simplify, utilizing the table of the power associative loop. After simplification, the result gives a new transformed substitution box. We construct 131028 S-boxes by varying the values of f and d. A flow chart of this scheme is given in Figure 1. Table 2 shows the design of the S-box entries, while Table 3, Table 4, and Table 5 show three different S-boxes.

III. ANALYSES OF S-BOXES
It is required to check the strength of the proposed S-boxes by using different algebraic and statistical analyses. In this section, we also evaluate our S-boxes with the help of histogram analysis.

1) NONLINEARITY
The nonlinearity of a Boolean function is the distance between that function and the set of all affine functions, i.e., the total number of bits that need to be changed in the truth table to obtain the closest affine function. It can be computed as

NL = 2^(n-1) - (1/2) |WHT_max|,

where WHT_max indicates the Walsh-Hadamard transform vector's maximum absolute value [20]. The optimum value of nonlinearity is 120. The comparison of our proposed S-boxes with some other existing S-boxes is depicted in Figure 2. Our S-boxes also have good results compared with [37], [38], [39], because the proposed S-box has an average nonlinearity of 111.5.

2) BIT INDEPENDENCE CRITERION
Webster and Tavares defined the output bit independence criterion (BIC) to assess an S-box. They advocated that every avalanche variable should be pairwise independent for a specified set of avalanche vectors; these avalanche vectors are obtained simply by complementing a single plaintext bit [20]. The BIC results of the proposed S-boxes match standard outcomes when they are compared with a few standard S-boxes. The minimum, average and square deviation values of BIC, and their comparison with other S-boxes, are shown in Figure 3.

3) STRICT AVALANCHE CRITERION ANALYTICALLY
A proposed S-box satisfies the strict avalanche criterion (SAC) if an alteration in a single input bit influences half of the output bits. For instance, when an S-box is used to build an SP network, a single modification in the input of the network produces an avalanche of changes [20]. A sketch of this computation follows.
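The SAC computation just described can be sketched in Python as follows (assumed code, not the authors'); a random byte permutation stands in for the PA-loop S-boxes of Tables 3-5, whose entries are not reproduced here.

# Minimal sketch (assumed placeholder S-box): estimating the SAC matrix
# of an 8x8 S-box; the ideal value of every cell is 0.5.
import random

random.seed(1)
sbox = list(range(256))
random.shuffle(sbox)  # placeholder bijective S-box

def sac_matrix(sbox, n=8):
    # m[i][j] = probability that flipping input bit i flips output bit j
    m = [[0.0] * n for _ in range(n)]
    for x in range(256):
        for i in range(n):
            diff = sbox[x] ^ sbox[x ^ (1 << i)]
            for j in range(n):
                m[i][j] += (diff >> j) & 1
    return [[v / 256 for v in row] for row in m]

m = sac_matrix(sbox)
avg = sum(sum(row) for row in m) / 64
print(f"average SAC value: {avg:.4f}")  # close to 0.5 for a good S-box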
Figure 4 shows the SAC outcomes of the proposed S-boxes. Moreover, the average, minimum and square deviation values of SAC, in comparison with different existing S-boxes, are also given in Figure 4.

4) LINEAR APPROXIMATION PROBABILITY
According to Matsui's original description [26], the linear approximation probability (or probability of bias) of a given S-box is defined as

LP = max over (Gx, Gy != 0) of | #{x in X : x . Gx = S(x) . Gy} / 2^n - 1/2 |,

where Gx and Gy are the input and output masks, X is the set of all possible inputs, and 2^n is the number of its elements. The LP values of the newly designed S-boxes, and a comparison with other S-boxes, are given in Figure 5.

5) DIFFERENTIAL APPROXIMATION PROBABILITY
For the differential approximation probability, each input differential should map uniquely to an output differential, to ensure a uniform mapping probability. The differential approximation probability of a given S-box (DP) is a measure of differential uniformity and is defined as

DP = max over (dx != 0, dy) of ( #{x in X : S(x) XOR S(x XOR dx) = dy} / 2^n ).

IV. IMAGE ENCRYPTION APPLICATION
In round 2, the second row of the S-box and the permutation P2 are used, and the above method is repeated for {n1, n2, n3, ..., n16}. We similarly perform rounds up to the 16th; in the last round, we select the 16th row of the S-box and the permutation P16 and, following the same pattern, obtain {y1, y2, y3, ..., y16}, whose entries lie in the range 0 to 255. This is 16 bytes, or a 128-bit ciphertext. For the image encryption scheme, we use the three different S-boxes constructed above. Figure 7(a-d) shows images encrypted using this scheme, and Figure 8 shows the flow chart of the encryption scheme. Figure 9 shows the layer-wise encryption using this scheme. For decryption we use the reverse process with the inverse S-boxes; Figure 10 shows the decrypted images obtained by the decryption process.

V. INVESTIGATIONAL UPSHOTS AND SIMULATION ANALYSES
In any investigation of a designed cryptosystem, the ultimate gauge is the outcome of different analyses. A sobering fact in research is the possibility of discovering false outcomes after long and tiresome work; sometimes it is really hard for scientists and engineers to identify the wrong step. The efficacy of any scheme is established only after a complete investigation of the analyses. For this reason, the simulation analyses of the proposed scheme are given hereafter.

A. KEY SPACE ANALYSIS
This analysis considers the total number of keys usable in the algorithm. The larger the key space of a cryptosystem, the more strength it bears against any exhaustive key search. For a chaotic cryptosystem, a key space greater than 2^100 [26] is proposed as secure enough.

B. KEY SENSITIVITY ANALYSIS
Key sensitivity is an essential criterion to be fulfilled by a robust cryptosystem. It assures that any wrong guess will totally change the output obtained from the enciphering algorithm; conversely, with a wrong set of keys, decryption should generate a totally different, wrong reconstruction of the original input. The PA-loop used in this article successfully satisfies the sensitivity test.

C. CORRELATION ANALYSIS
Pixels are the building blocks of images. They are numeric values that are highly correlated with neighboring pixels in all three directions, i.e., horizontally, vertically, and diagonally. In an enciphered image, the correlation values must approach zero; this is the main objective for a cryptographic algorithm to achieve in any image encryption scheme. As a result, the rearrangement of the pixel values back to the original becomes extremely difficult for an attacker. A sketch of this measurement appears below.
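A minimal sketch of the correlation measurement (assumed code, not the authors') is given below; a random array stands in for an actual cipher image.

# Minimal sketch (assumed placeholder image): adjacent-pixel correlation
# of a grayscale image in the three directions reported in Table 6.
import numpy as np

def adjacent_correlations(img):
    img = img.astype(np.float64)
    pairs = {
        "horizontal": (img[:, :-1], img[:, 1:]),
        "vertical":   (img[:-1, :], img[1:, :]),
        "diagonal":   (img[:-1, :-1], img[1:, 1:]),
    }
    return {k: np.corrcoef(a.ravel(), b.ravel())[0, 1] for k, (a, b) in pairs.items()}

# Values close to zero indicate successful decorrelation of the cipher image.
cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(adjacent_correlations(cipher))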
As a result, the rearrangement of the pixel values back to the original ones becomes extremely difficult for an attacker. Table 6 shows the correlation analysis of the proposed scheme and a comparison with other well-known schemes. D. HISTOGRAM ANALYSIS The histogram analysis of an image provides information about its tonal distribution. The graph is obtained by plotting the total number of pixels of a certain tone along the y-axis, whereas the x-axis represents the single tonal values; darker and lighter portions of the image are represented on the left and right sides of the graph, respectively. The encryption scheme tries to distort the original combination of pixels, which also makes the histogram flat after these operations. Figure 11 and Figure 12 show the histograms of the original and encrypted images of Lena and Baboon, respectively. VI. DIFFERENTIAL ANALYSES Differential analyses, sometimes also known as sensitivity analyses, assess how hard it is to retrace/retrieve an original image. Their two major divisions are the number of pixels change rate (NPCR) and the unified average changing intensity (UACI). A. NPCR AND UACI NPCR measures the effect on an encrypted image of varying only a single bit of the input: it tells us the proportion of pixels changed by this increment. Its standard value for a good encryption scheme lies near 99%, computed as

$NPCR = \frac{1}{W \times H} \sum_{i,j} D(i,j) \times 100\%$, with $D(i,j) = 0$ if $C_1(i,j) = C_2(i,j)$ and $D(i,j) = 1$ otherwise,

which affirms strength against differential analysis. The unified average changing intensity (UACI) measures the difference of intensities between the original and enciphered images; its acceptable value lies near 33%, as per

$UACI = \frac{1}{W \times H} \sum_{i,j} \frac{|C_1(i,j) - C_2(i,j)|}{255} \times 100\%$,

where $C_1$ and $C_2$ are the two compared images of size $W \times H$. Table 7 gives the values of NPCR and UACI. B. CHI-SQUARE TEST Pixels, the building blocks of digital images, are highly correlated with each other in neighboring regions to produce a certain kind of shade. The uniformity of their distribution is measured statistically using the chi-square test, while the same property is analyzed pictorially in the histogram analysis. In the chi-square test, the observed and expected values are used to attain a significance level. The statistic is

$\chi^2 = \sum_{i} \frac{(o_i - e_i)^2}{e_i}$,

where i represents the intensity level of the image and the expected value $e_i$ is 256 for a 256 × 256 image. The outcomes are verified against the chi-square distribution table at the 0.05 and 0.01 significance levels. For 255 degrees of freedom, the critical values at 0.05 and 0.01 probability are 293.2478 and 310.457, respectively. Table 8 shows the chi-square values generated from the encrypted Lena image using the proposed scheme. It also reveals that the hypothesis is accepted at the 0.05 and 0.01 levels of significance, which means the pixel distribution is uniform. C. TIME EXECUTION PERFORMANCE The present-day world focuses on the time taken by machines to complete their assignments. Old-fashioned devices consume more time, and hence energy, in achieving their goals; likewise, any proposed scheme should execute its job in a short interval of time, and for larger real-life data the execution time should be reduced to seconds or even less. For calculating the timings of the proposed work, we used a system with an Intel® Core™ i7-8565U CPU @ 1.80 GHz (1.99 GHz), 8 GB RAM, and a 64-bit operating system (x64-based processor); the language used is Python version 3.6. Table 9 lists the execution times. The differential and uniformity statistics above are sketched in code below.
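A minimal Python sketch of these statistics (our own illustration, assuming 8-bit grayscale images held as NumPy arrays; the random images merely stand in for real ciphertexts):

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR (% of differing pixels) and UACI (% average intensity
    change) between two encrypted 8-bit images of the same shape."""
    npcr = np.mean(c1 != c2) * 100
    uaci = np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255) * 100
    return npcr, uaci

def chi_square(img, levels=256):
    """Chi-square statistic of the gray-level histogram against a
    uniform distribution (expected count = img.size / levels)."""
    observed = np.bincount(img.ravel(), minlength=levels)
    expected = img.size / levels
    return np.sum((observed - expected) ** 2 / expected)

# Stand-ins for two ciphertexts of plain images differing in one bit
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(npcr_uaci(c1, c2), chi_square(c1))
```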
D. INFORMATION ENTROPY This analysis deals with the level of randomness achieved. The amount of randomness gives an impression of the true efficacy of a cryptosystem. Information entropy (IE) quantifies this randomness and unpredictability and is defined as

$IE = -\sum_{j} P(u_j) \log_2 P(u_j)$,

where $P(u_j)$ denotes the probability of the random variable u at the jth index. The optimal value for an encrypted image is 8; any enciphering technique generating IE outcomes near 8 is considered robust and secure. When observed pictorially, such encrypted images generate a flat histogram curve, authenticating their randomness and unpredictability. Table 10 presents the IE outcomes of the proposed scheme versus some well-known cryptosystems. VII. CONCLUSION This article presents a modified encryption scheme, i.e. a modification of Serpent, in which the construction of the S-box is different: it is developed using a PA-loop. The superiority of this structure over the extended binary Galois field is due to the larger key space, i.e. a larger number of possibilities is available here compared to a Galois field; a PA-loop has many representations in terms of Cayley tables, as opposed to the single Cayley representation in GF. The scheme includes a 128-bit key along with a PA-loop of order 256. If an attacker has knowledge of the key but no information about the loop, he cannot succeed in breaking it. Moreover, the proposed mathematical system is non-commutative, making it harder to break. One of this scheme's key benefits is that it works for both text and image encryption. The suggested scheme was examined using various analyses to determine its viability, and all of the standard tests had positive outcomes, indicating that it could be used in real-world situations.
4,565.4
2023-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Moving Cages Further Offshore: Effects on Southern Bluefin Tuna, T. maccoyii, Parasites, Health and Performance The main aim of this study was to determine the effects of offshore aquaculture on SBT health (particularly parasitic infections and haematology) and performance. Two cohorts of ranched Southern Bluefin Tuna (SBT) (Thunnus maccoyii) were monitored throughout the commercial season, one maintained in the traditional near shore tuna farming zone and one maintained further offshore. SBT maintained offshore had reduced mortality, increased condition index at week 6 post transfer, reduced blood fluke and sealice loads, and haematological variables, such as haemoglobin or lysozyme, equal to or exceeding those of near shore maintained fish. The offshore cohort had no Cardicola forsteri and a 5% prevalence of Caligus spp., compared to a prevalence of 85% for Cardicola forsteri and 55% for Caligus spp. near shore at 6 weeks post transfer. This study is the first of its kind to examine the effects of commercial offshore sites on farmed fish parasites, health and performance. Introduction Offshore aquaculture is in its infancy worldwide, yet commercial development is underway in numerous countries including the USA, Ireland, Norway, Spain, Italy, Malta, Belgium, Scotland, the UK, Japan, and Australia [1]. Numerous factors distinguish a near shore from an offshore site, including location or hydrography, the in-water and above-water environment, and the ease of access and associated operational logistics, yet no formal international definition has been agreed. In the context of this study, offshore was defined by reduced access (i.e. remoteness) and increased exposure to the environment, both in- and above-water. The attractiveness and potential benefits of moving aquaculture cages further from the shore are many, including fewer limits to the scale of operation, enhanced water quality, lower costs of environmental monitoring, reduced interaction with urban populations and inshore environmental concerns, and reduced disease risk [2]. In addition, moving cages from near shore to offshore sites may become necessary in the future due to the many anticipated effects of climate change [3]. Yet, many of these assertions have been insufficiently tested and the commercial feasibility of offshore development is presently unknown. Offshore aquaculture has not been extensively developed for many reasons. Moving farther offshore is capital intensive, leading to increases in operation and servicing costs which need to be outweighed by potential performance benefits of the cultured species. There are also investment uncertainties related to the optimal configuration of sites, the species most suitable to the exposed conditions, and a lack of the necessary technology [1,2]. Technology does not just refer to strong farming structures such as cages; it also concerns advanced feeding techniques, communication, mortality retrieval systems, and monitoring systems, which allow management of stocks that are not easily accessible [1]. In addition, sufficient testing of the feasibility of offshore aquaculture requires large baseline datasets at full commercial scale, which are absent in the current published literature. Southern Bluefin Tuna (SBT) have been ranched in near shore cages in Port Lincoln, South Australia since 1991.
In Australia, schools of 2-4 year old wild SBT are captured by purse seine and carefully towed back to the Tuna Farming Zone (TFZ) in Spencer Gulf near Port Lincoln, South Australia, where they are transferred into several grow-out cages and fattened on baitfish for three to six months. As a member nation of the Commission for the Conservation of the Southern Bluefin Tuna (CCSBT) and with sustainability in mind, the Australian SBT industry strictly adheres to catch quotas, quantified upon the arrival of a tow cage within the TFZ, prior to the start of ranching. Large commercial-scale baseline datasets have been collected for several decades concerning environmental monitoring, stock performance and health, and the economic viability of ranched SBT maintained in the TFZ (Australian Southern Bluefin Tuna Industry Association (ASBTIA) pers. comm.), enabling future research into alternative husbandry practices, such as site selection. In addition, current private investments made by the Australian SBT ranching industry into technology and operations infrastructure can be easily translated to the offshore environment. The aim of this project was to examine the feasibility of offshore versus near shore aquaculture using the ranching of SBT in Port Lincoln, South Australia as a case study. In this study, feasibility was measured through SBT health, i.e. parasite loads and haematology, and performance, i.e. condition index and survival. Although cost cannot be directly considered in this study due to commercial confidentiality restrictions, economic implications are discussed. Ethics Statement All work with animals, samples and methods for recovering samples was approved by the University of Tasmania board of animal ethics, project number A0010593. Experimental Fish and Site Characteristics Two different cohorts of SBT were captured by purse seine in the Great Australian Bight in February 2010. Each cohort was transported to the TFZ in a separate towing cage. The near shore cohort of 9165 SBT was transferred on 14/3/2010 into three grow-out cages and the offshore cohort of 7300 SBT was transferred on 15/3/2010 into three grow-out cages. The near shore site was located at 34° 40.299′ S, 136° 04.708′ E and the offshore site at 34° 44.409′ S, 136° 22.703′ E (Figure 1). A complete description of the hydrology and other environmental parameters for each site can be found in Table 1. Transfer procedures were identical between both sites, carefully stipulated and monitored by the Australian Fisheries Service as part of the CCSBT quota allocation guidelines. SBT were stocked at an initial cage density of 3.32 kg m⁻³ for the near shore cohort and 3.37 kg m⁻³ for the offshore cohort. SBT were fed frozen sardines at an average rate of 0.8 kg SBT⁻¹ day⁻¹ for their entire ranching period. In 2010, commercial sites within the near shore TFZ ranged in size between 63 and 341 ha, with an average biomass at harvest of 2,283 kg ha⁻¹. The whole near shore TFZ is 17200 ha, of which 1569.5 ha was under commercial lease in 2010. The commercial offshore site was 100 ha, with an average biomass at harvest of 3,087 kg ha⁻¹; in 2010 the whole offshore zone was 38350 ha. Sample Collection Field Collection. Ranched SBT are wild, and each cohort consists of several schools of SBT mixed together; therefore, individual SBT, not cages of SBT, were used as replicates in this study to measure the effect of ranching site on SBT.
Sampling was limited due to the high commercial value of the fish, restricting the total sample size to 100 SBT. Three sampling time points were chosen for this study: at transfer into the grow-out cages, to establish initial differences between the cohorts of fish; at week 6 of ranching, to assess the effects of site on ranching performance; and at week 23, to assess the effects of site on long-term ranching performance. Week 6 was chosen as the most important sampling time point for two reasons: (1) limited effects of captivity on ranched SBT prior to 6 weeks have been observed, and (2) a significant mortality event and health changes are known to occur at week 6 in near shore ranched SBT [8]. Samples were collected from both the near shore and offshore SBT at transfer (n = 10 per site), week 6 (n = 20 per site) and week 23 at harvest (n = 20 per site) post transfer. Transfer samples were collected during transfer from the tow cage to the grow-out cage, and the week 6 and week 23 samples were collected during commercial harvests. At the initial and week 6 time points, SBT were sampled using a baited hook and line; divers caught the SBT at the week 23 sampling. The total time between capture and killing of each SBT was less than one minute for both catching methods. Once on the boat, SBT were immediately spiked in the head, the brain was removed using a 'Taniguchi tool' (core), and a wire was placed down the spine to destroy the spinal nerves. Length and weight were recorded for all SBT at the time of sampling. At transfer and week 6, SBT were weighed whole, but at week 23 SBT were weighed after the gills and viscera were removed, due to space limitations associated with large commercial harvests. Weight for SBT sampled in week 23 was therefore corrected by dividing weight (kg) by 0.87 [9]. Condition index was calculated for each sample as weight (kg) divided by length (m)³. Immediately after external surface examination, whole blood was collected from the severed lateral artery in the pectoral recess into two 10 ml tubes (Sarstedt, Ingle Farm, South Australia), one heparinized and one non-heparinized, and placed on ice. Blood was collected within 3 minutes of fish capture. During transfer and week 6 sampling, parasites were quantified. External metazoan parasites were quantified from both the skin and gill arches with the naked eye during killing or as soon as possible after the SBT were killed. All lice visible to the naked eye were collected as soon as possible; any additional lice remaining on tuna surfaces were then detected using a technique described previously [9]. Parasites were not quantified from the week 23 samples, as previous studies have determined parasite loads to peak on ranched SBT earlier in the ranching season [10]. The gills and viscera were then excised; the heart was placed in a waterproof tub, the visceral organs were placed in a waterproof bag, and both were stored on ice. Laboratory Processing. The heparinized vial of whole blood was used for whole blood and plasma aliquots. Three 500 µl aliquots of whole blood were transferred into 1.5 ml plastic tubes and frozen at −20 °C. The remaining blood was centrifuged at 3000 × g at 4 °C for 5 minutes. Blood plasma was aliquoted into five 1.5 ml plastic tubes and frozen at −20 °C. The non-heparinized vial of whole blood was used for serum collection: vials were stored upright at 4 °C for 24 hours, centrifuged at 1000 × g at 4 °C for 5 minutes, and serum was aliquoted into three 1.5 ml tubes.
Serum samples were stored at −20 °C. Hearts were dissected 2-4 h after removal from the carcass and flushed with physiological saline to dislodge any adult Cardicola forsteri [see 11]. Flushes were then poured into Petri dishes and examined for the presence of adult C. forsteri using a dissection microscope. Blood Variables: Haematology. Haemoglobin concentrations were determined from whole blood aliquots using the cyanmethaemoglobin assay based on [12] as modified by [13]. Blood plasma glucose and lactate were measured using Accu-Chek Advantage II and Accutrend Plus (Cobas), respectively. The pH of blood plasma samples was measured using a Minilab Isfet pH meter Model IQ125 (IQ Scientific, USA). Blood plasma osmolality was determined using a Vapro Model 5520 vapour pressure osmometer (Wescor Inc., Logan, Utah, USA). Blood Variables: Humoral Immune Response. Blood serum was analyzed in triplicate for lysozyme activity and alternative complement activity. Lysozyme activity was measured using a method based on that described by [14] and modified by [13]. Blood serum alternative complement activity was measured using a modified [15] method as described by [13]. Statistical analyses. Parasite infections were characterized by prevalence (the number of host infections as a proportion of the population at risk), mean intensity (the average number of parasites per infected host) and mean abundance (the average number of parasites across all hosts) [16]. Sterne's exact 95% confidence intervals were calculated for prevalence, and 95% bootstrap confidence intervals (with 2000 replications) were calculated for mean abundance, using the software 'Quantitative Parasitology 3.0' supplied by [17]. The prevalence and mean abundance of each species were compared between treatments and ranching durations in a pairwise fashion; given the high total number of pairwise comparisons, α = 0.01 was regarded as significant for these statistics. All other performance, haematology, and immunology results were interpreted using the R 2.8.1 statistical package (© 2008, The R Foundation for Statistical Computing). Survival was assessed using a log-rank test for equality of the two Kaplan-Meier survival curves, one for each treatment. Condition index and blood parameters were analyzed for differences between the treatments at each sample date, and for differences within a treatment across all sample dates, using ANOVA. The assumption of homogeneity of variances was checked by residual plots and the Bartlett test, and variables were transformed when necessary; plasma pH was log₁₀ transformed after failing the Bartlett test. The Tukey HSD post-hoc test was applied at a significance level of α = 0.05 to determine differences between the explanatory variables.
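As an illustration of the parasite statistics defined above, a minimal Python sketch (our own, not the 'Quantitative Parasitology' software; the per-fish counts are hypothetical) could look like:

```python
import numpy as np

def parasite_stats(counts, n_boot=2000, seed=1):
    """Prevalence, mean intensity, mean abundance and a 95% bootstrap
    confidence interval for mean abundance from per-fish parasite counts."""
    counts = np.asarray(counts)
    infected = counts[counts > 0]
    prevalence = infected.size / counts.size
    mean_intensity = infected.mean() if infected.size else 0.0
    mean_abundance = counts.mean()
    rng = np.random.default_rng(seed)
    boot = [rng.choice(counts, counts.size, replace=True).mean()
            for _ in range(n_boot)]
    return prevalence, mean_intensity, mean_abundance, np.percentile(boot, [2.5, 97.5])

# Hypothetical counts of C. forsteri in 20 fish sampled at week 6
print(parasite_stats([0, 3, 1, 0, 5, 2, 0, 4, 1, 6, 2, 3, 0, 7, 2, 1, 4, 3, 5, 2]))
```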
Results. While the offshore cohort began ranching (i.e. at transfer) with a lower condition index (F = 5.7614, df = 1,18, p = 0.0274), their condition increased considerably between transfer and week 6 of ranching (Table 2). At week 6, the offshore cohort averaged a higher condition index than the near shore cohort (F = 5.5738, df = 1,38, p = 0.0235). The offshore cohort maintained its condition index from week 6 to week 23 of ranching (p > 0.05), while the near shore cohort continued to increase to a condition equal to that of the offshore cohort by week 23 (F = 0.569, df = 1,37, p = 0.4554). Changes in condition index can be expected to occur in an asymptotic fashion during fattening, increasing quickly from low to medium condition and much more slowly at higher condition. The offshore cohort had higher survival through the ranching period (χ² = 107, df = 1, p < 0.001) (Figure 2), with 5.6% cumulative mortality compared to 10% in the near shore cohort. Table 1. Comparison chart of the remoteness, above-water environment and hydrology of the two farming sites, near shore in the Tuna Farming Zone (TFZ) and offshore. Distance from shore is represented as distance from port. Wind speeds were measured using weather stations at Boston Island for the near shore site and an average of the stations at Spilsby Island and Thistle Island (Figure 1). Table 2. Mean ± SE for length (cm), weight (kg), condition index and blood parameters in ranched SBT at transfer, week 6 and week 23 of ranching in the near shore and offshore cohorts. Initial mortality was higher in the offshore cohort and may be attributed to different conditions on the tow from the capture site to the lease site or to initially unfavorable conditions at the offshore lease site early in the ranching season. Nearly 80% of the total mortality in the offshore cage occurred in weeks 1 and 2 of ranching. The near shore cohort had little initial mortality, with 84% of the total mortality occurring between weeks 8 and 12 of ranching. Approximately 100 SBT were unaccounted for in the weekly mortality counts for the offshore cohort. Upon consultation with the farm manager, it was assumed these fish fell victim to either shark attacks or poaching, although neither assumption can be confirmed. Cumulative mortality in the offshore cohort was 2.5% when the unaccounted SBT were not included in the calculation. At week 23 of ranching, the offshore cohort had a 1.5 g dL⁻¹ higher haemoglobin concentration (F = 15.920, df = 1,38, p < 0.001) and a 20 mmol kg⁻¹ higher osmolality (F = 9.7547, df = 1,38, p = 0.003) compared to the near shore cohort (Table 2). While blood plasma lactate was higher in the offshore cohort at transfer (F = 20.592, df = 1,18, p < 0.001), it was not different at week 6 or week 23 of ranching (Table 2); the initial difference may therefore be attributed to differences between the cohorts in tow and/or transfer conditions, not to effects of location. No other differences were found in blood parameters or performance between the cohorts (p > 0.05) (Table 2). Offshore fish had lower prevalence (p = 0.048) and mean abundance (t = −2.366, p = 0.0235) of Caligus spp. at 6 weeks of ranching compared to near shore SBT. While offshore SBT maintained low Caligus infections from transfer to week 6, the prevalence in the near shore cohort increased from 0 to 55% (p = 0.004) and the mean abundance increased from 0 to 0.65 Caligus per fish (t = 3.901, p = 0.0045) (Table 3). There was no Cardicola forsteri infection within the offshore cohort between transfer and week 6 of ranching. Prevalence of C. forsteri in the near shore cohort increased from 20 to 85% (p = 0.001), mean intensity from 1 to 4.18 flukes (t = 4.452, p = 0.001), and mean abundance from 0.20 to 3.55 (t = 4.741, p = 0.001) over the same time period (Table 3). No differences in prevalence, mean intensity, or mean abundance of Caligus sp. or Cardicola forsteri were found between the two sites at the start of ranching (i.e. at transfer).
There was no effect of sampling date or location on the mean intensity of Caligus sp. No differences were found in prevalence, mean intensity, or mean abundance of gill parasites (Hexostoma thynni, Pseudocycnus appendiculatus, and Euryphorus brachypterus) (Table 3). Discussion SBT maintained offshore had better survival and lower Cardicola forsteri and Caligus parasite loads, and the haematology of SBT ranched offshore was equal to or exceeding that of SBT maintained in the traditional near shore ranching environment. These results suggest the offshore cohort may be able to respond better to ranching than SBT maintained near shore, possibly due to better environmental conditions. The observation of improved survival within the offshore cohort is the most significant outcome of this experiment. An average 6-14% cumulative mortality has been reported across the industry, occurring mostly in a restricted period from 6 to 12 weeks of ranching [8]. This annual mortality event has been observed within ranched SBT in South Australia since 1997 (ASBTIA pers. comm.) and was also observed within the near shore cohort of SBT in this study. The timing of this mortality, the duration of the event, and its severity vary annually [18], between tows [13,18], with the timing of tow arrival within a season, between companies or husbandry techniques [18], and even between cages within the same tow [9,13]. Because the cause of the annual mortality event is unknown, it cannot be conclusively stated whether or not the offshore cohort may be impacted in the future. Yet, the current results suggest that maintaining fish offshore may prevent their exposure to the near shore mortality event, thereby maintaining enhanced survival in the future. A further study is currently underway to determine whether temporary offshore holding can offer similar survival benefits through to harvest. The offshore cohort also demonstrated enhanced condition earlier in the season. Although the offshore cohort had a lower condition index at the beginning of ranching than the near shore cohort, they quickly gained condition, both surpassing the near shore cohort and reaching a condition equivalent to harvest quality by week 6 of ranching. The ability to have SBT reach harvest condition as early as possible in the ranching season is advantageous for the Australian commercial SBT operation, as it allows fish to be marketed earlier in the season on the fresh market. Not only does each SBT sold on the fresh market obtain a higher market value than on the frozen market at the end of the season, but early stock harvests also reduce feeding and maintenance costs (ASBTIA pers. comm.). The observed enhanced condition may be attributed to a lack of stress and an improved ability to convert feed into growth, and/or may demonstrate an improved ability to acclimate to ranching more quickly than SBT maintained near shore. Another promising result of offshore SBT ranching is the reduction in sealice and blood fluke infections. Described in 1997 [19], the blood fluke Cardicola forsteri is currently a common and prevalent infection in ranched SBT [9,11,13], usually infecting up to 100% of ranched SBT after two months of captivity [11]. In 2004, C. forsteri was identified as one of the most significant risks associated with Australian ranched SBT [20]; reduction of this infection is therefore an important result for the commercial industry.
It is possible that the greater depth and current velocity offer protection against infection by decreasing the incidence of cercariae within the cages, as the intermediate host is known to be a benthic terebellid polychaete, Longicarpus modestus [21]. It may also be possible that the intermediate host is absent from the offshore site, as its distribution is not known. Finally, the enhanced health condition of the offshore SBT may also reduce infection success. Ranched SBT are able to develop a specific antibody response against C. forsteri, and reduced infection burdens have been observed over the ranching duration [22]. Research is currently underway to determine the trigger for specific antibody production against C. forsteri and its effects on current infection and future exposure. Further research is needed on the biology and distribution of the intermediate host and on the behavior and biology of the cercariae to assess a ranching site's risk of C. forsteri infection. The lack of Cardicola forsteri infection observed within the offshore cohort provides a unique opportunity to investigate some of the claims of C. forsteri-induced performance and health effects on SBT. It has been commonly assumed that all ranched SBT maintained at the traditional near shore location are exposed to C. forsteri cercariae as soon as they enter the farming zone [11]. In the past, researchers were not able to uncouple the effects of captivity from the effects of infection due to the 100% prevalence of C. forsteri within ranched SBT. It has been suggested that C. forsteri infection may cause a reduction in haemoglobin concentration [9], an elevation in lysozyme concentration [8,9], and an elevation in alternative complement activity [8]. While there was no C. forsteri within the offshore cohort compared to 85% prevalence within the near shore cohort at week 6 of ranching, there was no difference between the cohorts in haemoglobin concentrations or humoral immune response. It is therefore unlikely that the observed mean intensity of infection with C. forsteri induces a haemoglobin reduction or changes in humoral immune response, although the infection intensities found in the near shore cohort were low and highly variable, which may mask or prevent significant haematological differences. It may also be that the effect of C. forsteri on haemoglobin or humoral immune response is short-lived [8] and was therefore missed by the large gaps between sampling times in this study. Lysozyme was significantly elevated in both cohorts at week 6, despite the absence of C. forsteri in the offshore group at this time. Complement activity progressively declined in both cohorts, despite the significant increase in C. forsteri prevalence and abundance in the near shore cohort and the absence of this parasite offshore. It has been suggested that the humoral immune response increases with duration of ranching [23], yet this trend was not observed within this study or within other previous studies [8,13]. Previous research has found no association between Cardicola forsteri infections and performance of SBT, measured as condition index and mortality [11]. Yet, the enhanced performance of the offshore cohort during the first few months of ranching may suggest a link which should be further investigated. Table 3. Parasite prevalence (P) (95% confidence interval), mean intensity (I) (95% confidence interval) and mean abundance (A) (95% confidence interval) in ranched SBT at transfer and week 6 of ranching in the near shore and offshore cohorts.
There was also a reduction in mortality from week 6 to 12 of ranching, consistent with the suggestion that C. forsteri infection may be associated with mortality [8,9,13]. However, the lower parasitic infections and reduced mortality in offshore SBT may be a spurious relationship. Our results do not provide scientific evidence for the role of C. forsteri in SBT mortality or health effects, due to the large number of differences between the offshore and the traditional near shore ranching environments and the limited number of sampling dates. However, this study does propose a potential role for offshore-maintained SBT as a control group for future investigations into the effects of C. forsteri. A lower infection of Caligus spp. was observed within the offshore cohort. An epizootic of Caligus spp. on ranched SBT is also a common and prevalent infection [20]. Prevalence increased from 0% at transfer to 55% at week 6 in the near shore cohort, consistent with previous descriptions of ranched SBT infection [10,24]. Caligus spp. prevalence has been shown to decline from week 6 onward, so that by week 18 infection was largely absent from the ranching population [10,13,24]. In contrast to Caligus infections in other farmed fishes, larval stages are rarely detected on ranched SBT, indicating an alternative source of mobile adult Caligus infections [25]. The reservoir of Caligus is Degen's leatherjacket (Thamnaconus degeni) [9], which is commonly attracted to the SBT grow-out cages during feeding [10,24,26]. These fish are benthic scavengers, and it has been suggested that moving SBT into deeper water may reduce interactions between SBT and the source of the Caligus, thereby reducing infection rates [26]. It is unknown whether location differences alone can explain the decline in Caligus infection, as husbandry differences may also contribute; enhanced feeding protocols may also reduce the attractiveness of the cages to opportunistic feeding by demersal fish. There is a relationship between the mean intensity of Caligus spp. infection and the severity of eye damage as well as decreased condition index [9,10,24]. The offshore cohort had both reduced prevalence and abundance of Caligus spp. and enhanced performance, i.e. condition index, although a causal link between the two findings cannot be made. Again, no differences were found between near shore and offshore maintained fish in humoral immune response, suggesting no effect of Caligus infection, although the infection intensities observed within this study were low. There were two further differences in the haematology of the offshore cohort compared to the near shore cohort: osmolality and haemoglobin concentration. Elevated osmolality was observed in offshore fish at week 23. Blood osmolality is known to increase when marine fish are not osmoregulating properly, for example during handling and transport [27][28][29]. During SBT end-of-ranching harvest procedures, fish are corralled into a restricted area; the increased fish density makes diver-assisted harvest quicker and therefore more humane for the fish. Since no changes were observed in the other stress-associated parameters, blood lactate and glucose concentrations, it is likely that this corralling event just prior to harvest caused the elevated osmolality, rather than a long-term effect of ranching site.
The offshore cohort maintained stable haemoglobin levels throughout the ranching season, unlike the near shore cohort, in which haemoglobin concentration was first elevated and then decreased between week 6 and week 23 of ranching. While the changes in haemoglobin concentration observed within this study occurred at only one time point and their magnitude may seem physiologically insignificant, previous studies have determined that changing haemoglobin levels are associated with the mortality event in near shore fish [8]. The maintenance of stable haemoglobin levels in the offshore fish may therefore be further evidence that these fish were not affected by the near shore mortality event, and further evidence of the better health and wellbeing of the offshore cohort compared to the near shore cohort. Completing this study within the restrictions of commercial operations caused the experimental design to be unavoidably compromised in two ways: the sample size was restricted and two discrete cohorts were used for comparison. Sample size was maximized by limiting sampling to the time points previously observed to yield the greatest significance. In addition, the effects of ranching site were discussed not only in comparison with fish ranched within the traditional near shore TFZ within the same season, but also with historical data collected over several years. Although different cohorts of fish may react differently to ranching [13], the parasite load, performance, and haematology observed in the offshore SBT differed drastically from the expected variance between near shore maintained cohorts [8,13]; the findings of this study therefore remain significant for the literature and for our understanding of the effects of offshore finfish culture. This is the first time the feasibility of offshore SBT ranching has been demonstrated on a commercial scale. A reduction in mortality and in the duration of fattening required to reach harvest condition may outweigh the increased operation costs, thereby making the move offshore economically viable. In addition, the numerous benefits of offshore culture, including reduced blood fluke and sealice loads and haematological health equal to or exceeding that of near shore maintained fish, may validate moving further offshore from an animal welfare point of view.
6,627.6
2011-08-25T00:00:00.000
[ "Biology", "Environmental Science" ]
One end to rule them all: Non-homologous end-joining and homologous recombination at DNA double-strand breaks Double-strand breaks (DSBs) represent the most severe type of DNA damage since they can lead to genomic rearrangements, events that can initiate and promote tumorigenic processes. DSBs arise from various exogenous agents that induce two single-strand breaks at opposite locations in the DNA double helix. Such two-ended DSBs are repaired in mammalian cells by one of two conceptually different processes, non-homologous end-joining (NHEJ) and homologous recombination (HR). NHEJ has the potential to form rearrangements while HR is believed to be error-free since it uses a homologous template for repair. DSBs can also arise from single-stranded DNA lesions if they lead to replication fork collapse. Such DSBs, however, have only one end and are repaired by HR and not by NHEJ. In fact, the majority of spontaneously arising DSBs are one-ended, and HR has likely evolved to repair one-ended DSBs. HR of such DSBs demands the engagement of a second break end that is generated by an approaching replication fork. This HR process can cause rearrangements if a homologous template other than the sister chromatid is used. Thus, both NHEJ and HR have the potential to form rearrangements, and the proper choice between them is governed by various factors, including cell cycle phase and genomic location of the lesion. We propose that the specific requirements for repairing one-ended DSBs have shaped HR in a way which makes NHEJ the better choice for the repair of some but not all two-ended DSBs. NHEJ is initiated by binding of the Ku70/Ku80 heterodimer to the broken DNA ends, which recruits the kinase DNA-PKcs and facilitates the recruitment of the downstream NHEJ factors XRCC4, XLF, and DNA ligase IV, which mediate the rejoining process.3 Unlike NHEJ, HR uses homologous sequences elsewhere in the genome to retrieve sequence information that was lost at the break site. HR starts with resection of the break ends, leading to RPA-coated single-stranded overhangs. Brca2 subsequently replaces RPA with Rad51 to form what is called a Rad51 nucleoprotein filament. Such a filament can pair with homologous sequences somewhere else in the genome and form a displacement loop (D-loop), followed by DNA repair synthesis to retrieve sequence information from the donor. During this process, a joint molecule between the broken DNA and the homologous donor template is formed. Different subpathways of HR exist and separate such joint molecules through distinct mechanisms.2,4,5 One-ended breaks, in contrast to two-ended DSBs, arise from replication problems. This can involve stalling of a fork at a replication-blocking lesion followed by replication fork collapse and breakage of one of the two sister chromatids formed behind the replication fork. A one-ended break can also arise when the replication machinery encounters a single-strand break and, upon unwinding of the DNA at the fork site, causes the disconnection of one of the two chromatids.2,[6][7][8] The repair of one-ended breaks is arguably more difficult than the repair of two-ended DSBs and represents a particular challenge for the mechanisms devoted to maintaining genome stability. Since a normal replication fork cannot be rebuilt at a collapsed or broken replication site, the classical semi-conservative mode of DNA replication cannot proceed. Instead, cells are able to employ a specialized HR subpathway termed break-induced replication (BIR) to resume DNA replication.[9][10][11]
This process involves the annealing of a broken end containing a single-stranded overhang to a single-stranded gap on the unbroken molecule. This step can be considered conceptually analogous to the Rad51-mediated step of D-loop formation during classical HR but appears to involve the strand annealing factor Rad52 instead of a Rad51 filament.12 Replication is resumed by a conservative mode of DNA synthesis where one chromatid contains both newly synthesized strands.13,14 In addition to BIR, one-ended DSBs can also be repaired by classical HR and possibly even by end-joining pathways if the second end is generated by an approaching replication fork.15 This, however, would require regulatory mechanisms to temporally coordinate the repair process with the progression of the cell cycle. Here, we discuss recent findings on how cells regulate the processes of NHEJ and HR at two-ended DSBs and elaborate on ideas about repair pathway usage at one-ended DSBs. NHEJ and HR both repair two-ended DSBs The pathways for repairing two-ended DSBs are best studied by analyzing cells maintained in G1 or G2 during repair, since this prevents the formation of one-ended DSBs during replication.16 Earlier studies with confluent cell cultures revealed that IR-induced DSBs are repaired with two-component kinetics, involving a fast process within the first few hours followed by a slower process extending over many hours after damage induction.17 The analysis of mutant cells showed that both processes require the classical NHEJ factors but that the slow process additionally involves the factors ATM, Artemis and proteins locating to γH2AX foci.18,19 Subsequent studies uncovered that the slow repair process requires ATM-mediated chromatin remodeling and involves a limited amount of end-resection.20,21 This resection step in G1/G0 cells utilizes some of the same resection factors employed during HR but has distinct features that allow the rejoining of the break ends by the NHEJ machinery.22,23 The fast and the slow NHEJ processes also differ in their risk of forming genomic rearrangements from the joining of incorrect break ends. While rearrangements occur fairly infrequently during the fast process, they arise about 5-fold more often from the slow process.22,24 The higher propensity to form rearrangements likely results from the increased mobility of slowly repairing DSBs.25 It is currently unknown why some breaks are repaired by fast NHEJ without resection while others undergo resection and slow NHEJ. The requirement for ATM suggests that the chromatin environment is an important factor, but the chemical complexity of a DSB also favors slow over fast NHEJ.18,26 Perhaps the most intuitive model is that cells first try to repair DSBs fast and without major end-modifications and only employ a more sophisticated resection program if the breaks reside in specific genomic locations or contain chemical end-structures that preclude fast repair (Figure 1). In contrast to G1, cells irradiated in G2 employ HR in addition to NHEJ for the repair of two-ended DSBs. This suggests that a sister chromatid serves as the template for repair during HR, as opposed to the homologous chromosome, which is also present in G1 cells, where repair exclusively proceeds by NHEJ. Repair in G2 exhibits similar two-component kinetics as in G1, where the fast repair process also involves the classical NHEJ factors but the slow process represents HR (Figure 1).27
This slow process also requires ATM-mediated chromatin remodeling and a resection step involving Artemis but orchestrates it in a manner compatible with the formation of a Rad51 nucleoprotein filament, a prerequisite for homology search and HR.28 Collectively, this analysis showed that NHEJ without resection constitutes the fast repair process both in G1 and G2 and repairs the majority of IR-induced DSBs. We have termed this pathway "resection-independent NHEJ". The slow process involves resection in both cell cycle phases, albeit to a different degree and regulated in a different manner. We have termed the slow NHEJ process in G1 "resection-dependent NHEJ" (Figure 1).3,29 Why NHEJ is utilized not only in G1 but also in G2, when a sister chromatid can serve as a template for repair by HR, is an open question which is discussed below. HR repairs one-ended DSBs but end-joining can do so too The repair of a one-ended DSB represents a particular challenge since both major DSB repair pathways, NHEJ and HR, rely on connecting two break ends, either without or with the potential to restore the sequence information that was lost at the break site. Moreover, broken replication forks, from which one-ended DSBs arise, cannot be rebuilt to resume replication since the required components, such as the replication pre-initiation complex, are no longer available.15,30 It appears that lower, and to a certain extent also higher, eukaryotes have solved this problem by the development of BIR (Figure 1).[9][10][11] How this process terminates in mammalian cells is largely unknown but likely involves the encounter of the BIR site with an approaching replication fork. At the site of such an encounter, both replication structures might converge and form a complex that needs processing to finalize the repair process. It is possible that this complex has similarities with the joint molecule structures arising during the classical HR pathway(s). In any case, the process of BIR, if not extended to the end of the chromosome,31 would entail the involvement of a second replication site and possibly the generation of a second break end. Thus, the processes for repairing one-ended DSBs might encompass many of the same concepts that apply to the repair pathways for two-ended DSBs. Insight into DSB repair pathway usage at replication-associated one-ended DSBs largely comes from studies with genotoxic agents inducing base damages or single-strand breaks. Such lesions, if encountered by the replication fork, can generate one-ended DSBs upon stalling and collapse of the forks.6 Indeed, treatment with the alkylating agent methyl methanesulfonate causes DSBs during replication whose repair depends on the HR pathway.32 Likewise, chromosome aberration formation is substantially enhanced in HR mutants compared with wt cells or NHEJ mutants.33 The predominant role of HR in repairing one-ended DSBs is further demonstrated by the exquisite sensitivity of HR mutants to a variety of agents that induce single-stranded DNA lesions, including the topoisomerase I inhibitor camptothecin.[34][35][36] Finally, the majority of spontaneous DSBs arise at replication forks, likely from endogenously arising single-stranded DNA lesions, and necessitate a functional HR pathway for cell survival.32 Thus, the prevailing evidence suggests that HR represents the predominant pathway for repairing one-ended DSBs.2,5
However, it is important to note that an alternative NHEJ (alt-NHEJ) pathway dependent on polymerase θ (Polθ) can also repair resected DSBs in the absence of HR and is important for cell survival in HR-mutant tumor cells.37 Collectively, this suggests that one-ended DSBs arising at replication forks are converted into two-ended DSBs that are predominantly repaired by HR, with alt-NHEJ serving as a backup pathway in the absence of functional HR (Figure 1). Figure 1. DSB repair pathways throughout the cell cycle. The majority of two-ended DSBs in G1 phase are repaired by the fast process of resection-independent NHEJ. The remaining DSBs undergo limited end-resection, allowing slow repair by resection-dependent NHEJ. In S phase, one-ended DSBs arise from replication problems and can be repaired by the specialized HR subpathway BIR. Arguably more often, however, a second break end is generated by an approaching replication fork, converting the one-ended into a two-ended DSB that can be repaired by classical HR pathways, with Polθ-dependent alt-NHEJ serving as a backup pathway in the absence of functional HR. The BIR process might also involve the engagement of a second break end. Similar to G1 phase, the majority of two-ended DSBs in G2 are repaired by resection-independent NHEJ. However, resection of the remaining DSBs is extensive, allowing repair by HR. Regulating HR to prevent end-joining of resected breaks The two main subpathways of HR are synthesis-dependent strand annealing (SDSA) and a pathway involving the formation of double Holliday junctions, henceforth referred to as the dHJ pathway.4 Following D-loop formation and DNA repair synthesis, SDSA proceeds by the displacement of the synthesized strand from the donor molecule and its annealing with the second DSB end that did not engage in homology search and strand invasion (Figure 2).38 The dHJ pathway, in contrast, involves the annealing of the second, non-invading DSB end to the D-loop, a step called second-end capture. This forms a structure that has been suggested to represent two crossing points between the two participating molecules, termed Holliday junctions. Upon resolution of these junctions, cross-overs (COs) between the molecules can arise.38,39 In case a DSB is repaired by HR using a sister chromatid as a template, such COs will manifest as sister chromatid exchanges (SCEs), which are genetically neutral since both sister chromatids contain the exact same genetic information (Figure 2).4,40 Figure 2. HR at one-ended DSBs. One-ended DSBs are repaired by HR processes, likely using a second break end generated from an approaching replication fork. HR is initiated by Rad51-mediated base pairing of the resected break end to a homologous sequence. For DNA repair synthesis and HR to proceed, Rad51 needs to be removed by Rad54, a step which is postponed until G2 phase due to the G2-specific activation of Rad54 by Nek1. This ensures that DNA repair synthesis starts at a time when the second break end is available. The dHJ subpathway of HR involves second-end capture before processing of the joint molecules, providing an intrinsic feature to control for the availability of a second break end. SDSA, in contrast, involves the displacement of the synthesized strand from the homologous donor, a step which could occur before the second end is created and bears the risk of joining break ends from different DSBs.
Joining incorrect break ends can also occur if one-ended DSBs are repaired by alt-NHEJ or by BIR processes that are aborted before a second break end is available. The choice between the dHJ pathway and SDSA is regulated by the chromatin remodeler ATRX. However, if HR involves the homologous chromosome or a homologous sequence on a heterologous chromosome, a CO event leads to loss of heterozygosity or the formation of chromosomal rearrangements.5,38 Thus, it has been suggested that cells limit the dHJ pathway to meiosis, when recombination between the homologous chromosomes is desirable, and employ SDSA for the repair of DSBs arising in mitotically growing cells.4 We have recently shown that the chromatin remodeler ATRX promotes an HR subpathway that involves extended DNA repair synthesis and the formation of SCEs, suggesting that this pathway represents the dHJ process. Indeed, ATRX limits the usage of SDSA, which has been suggested to involve only short patches of DNA repair synthesis and no SCE formation.41,42 So why do cells use an HR subpathway which forms COs when SDSA appears to be the safer means? An answer to this question may lie in the consideration that spontaneously arising DSBs harbor only one end. As discussed above, it is likely that the second break end needed for repair by HR will be generated from an approaching replication fork. It might therefore be beneficial for a cell to employ an HR pathway which involves second-end capture before the joint molecule between the broken DNA and the homologous donor template is processed. This would not be the case for SDSA, where displacement of the synthesized strand from the donor molecule likely occurs irrespective of the availability of a second end. Indeed, strand displacement in the absence of a second break end harbors the risk of annealing one-ended DSBs from different genomic regions, resulting in deleterious genomic rearrangements (Figure 2). Thus, we suggest that one reason for the preferential usage of the dHJ pathway over SDSA might be that HR has evolved to repair one-ended DSBs at collapsed replication forks, where premature displacement of the synthesized strand carries the risk of rearrangement formation. If processing of the joint molecule follows a second-end capture step, as is the case for the dHJ pathway, rearrangement formation is limited. This advantage appears to come at the cost of forming COs which, however, are genetically neutral as long as HR is restricted to the sister chromatid and does not involve another chromosome (Figure 2).5 Another finding about the regulation of HR might also be viewed in the context of HR having evolved to repair one-ended DSBs. As introduced above, HR involves the formation of a Rad51 nucleoprotein filament pairing with its homologous template. For DNA repair synthesis to start, Rad51 is removed with the help of Rad54.[43][44][45] We recently showed that this function of Rad54 requires its phosphorylation by Nek1 which, unexpectedly, occurs in the late G2 phase of the cell cycle even if DSBs arise during S phase.46 We suggest that this delay of DNA repair synthesis serves to postpone later HR stages until a second break end has been generated from an approaching replication fork.
This provides the possibility for second-end capture and minimizes the chances for strand displacement and rearrangement formation in the absence of a second end (Figure 2). Collectively, recent findings suggest that the process of HR may have evolved to repair one-ended DSBs. Such lesions are likely converted into two-ended DSBs by approaching replication forks, and HR appears to be regulated to minimize the potential for joining break ends from different DSBs. Such regulatory mechanisms include the delay in DNA repair synthesis until very late phases of the cell cycle and a second-end capture step prior to the processing of the joint molecules.41,42,46 The second-end capture step likely leads to the formation of dHJs which, upon resolution, can form COs. Since COs can be deleterious if a repair template other than the sister chromatid is used, this provides another explanation of why HR is restricted to post-replicative cell cycle phases.5 So why not always use HR in G2? As outlined above, both NHEJ and HR repair two-ended DSBs that arise in G2 phase, where NHEJ represents the fast and HR the slow repair component.27,28 Our understanding of the regulatory mechanisms of HR might help to answer the question of why HR is not used exclusively for such lesions. The necessity to temporally coordinate HR with the generation of a second break end at a collapsed replication fork requires the delay of DNA repair synthesis until very late phases of the cell cycle.46 Moreover, second-end capture is employed to prevent the premature processing of joint molecules before a second break end is available. This results in the formation of dHJs, which are resolved in a manner generating COs. Thus, despite being considered error-free, HR has the potential to generate rearrangements if a template other than the sister chromatid is used or if the sister chromatid is used off-frame at repetitive regions.38 NHEJ, on the other hand, is likely to join correct break ends if employed quickly after DSB induction and might only carry a significant risk of joining incorrect ends for DSBs that are refractory to fast repair.22,24 Moreover, since the G2/M checkpoint is negligent and allows the progression of cells with unrepaired DSBs into mitosis,[47][48][49] it might simply be the better option to repair as many DSBs as possible by fast NHEJ before engaging in the slower HR process, which also has its limitations (Figure 3). Figure 3. NHEJ and HR at two-ended DSBs in G2. Two-ended DSBs arising in G2 phase can be repaired by fast resection-independent NHEJ and slow HR. NHEJ is potentially error-prone since it cannot reconstitute sequence information that is lost at the break site. HR is error-free if the homologous sequence on the sister chromatid is used. However, HR can cause COs, which are deleterious if a homologous template other than the sister chromatid is used or if the sister chromatid is used off-frame, e.g. at repetitive sequences. Thus, both processes are potentially error-prone. Moreover, the G2/M checkpoint, which serves to provide time for repair, is negligent and allows cells to progress into mitosis with unrepaired DSBs. Thus, it might be beneficial for a cell to use fast NHEJ instead of slow HR to repair as many breaks as possible. Conclusion The intricate choice between employing NHEJ or HR for DSB repair might be largely governed by the distinct risks of these pathways to form genomic rearrangements. NHEJ mostly rejoins correct break ends, but rearrangements can arise from this pathway, particularly from the slow resection-dependent NHEJ process. HR, in contrast, is often regarded as being error-free.
However, the consideration that the repair of one-ended DSBs demands an HR subpathway that engages a second break end and forms COs reveals the limitations of HR. CO formation is genetically neutral if the homologous template on the sister chromatid is used but leads to rearrangements in all other cases. This explains why cells favor resection-dependent NHEJ over HR in G1 when no sister chromatid is available. It also elucidates why the fast resection-independent NHEJ process is employed together with HR in G2, particularly in light of the short duration of this cell cycle phase and the negligence of the G2/M checkpoint.
Effect of Scattering Correction in Neutron Imaging of Hydrogenous Samples using the Black Body Approach. The "black body" (BB) method is an experimental approach aiming at correcting scattering artifacts and systematic biases in neutron imaging experiments. It is based on the acquisition of reference images, obtained with an interposed grid of neutron absorbers (BB), from which the background, including contributions of scattering from the sample, can be extrapolated. We evaluate in this paper the effect of the BB correction on two experimental datasets acquired with different setups at the NEUTRA and ICON beamlines at the Paul Scherrer Institut. With the two experiments we demonstrate the efficient utilization of the method for 2D as well as 3D data, and in particular for kinetic studies. In the first dataset, differently varnished wood samples are studied through time-resolved kinetic neutron radiography to evaluate the change in wood moisture content due to changes in relative humidity. In the second case study, an engineered soil sample simulating a small experimental bioretention cell with rainfall, also known as a rain garden, is imaged through on-the-fly neutron tomography. Introduction Together with spectral influence (beam hardening) and effects at pronounced edges, scattering from the sample and the detection system poses the biggest challenge for quantitative measurements of the linear attenuation coefficients in neutron imaging experiments with high spatial resolution. We have recently introduced an efficient method and the corresponding data treatment for scattering correction that, compared to earlier attempts, has the advantage of requiring prior knowledge neither of the neutron spectrum nor of the sample composition [1,2]. The method is based on the acquisition of additional reference images with an interposed grid of neutron absorbers, called "black bodies" (BB), which lend the method its name: BB correction. The main idea of the approach is that the signal measured behind a black body can be interpreted as the additive background of scattering components. Here we present exemplary applications of the method to different imaging experiments and we discuss the effect on relevant quantifications. We consider two experiments that were performed at the Paul Scherrer Institut (PSI). In the first one, a collection of wood samples with different varnishing treatments is studied through kinetic neutron radiography in order to evaluate the changes in wood moisture content due to changes in relative humidity. In the second case study, an engineered soil sample simulating a small experimental bioretention cell with rainfall, also known as a rain garden, is imaged. First, we briefly describe the type of images which are required for the BB correction, then we present the experimental setup for the two study cases, and finally we show the effect of the applied correction. BB correction The BB correction is an experimental approach with the aim of mitigating artifacts due to systematic biases through scattering components [1,2]. Such artifacts consist of an increased transmission signal, often especially in the center of a bulk sample, resulting in a bias towards lower computed attenuation coefficients and "cupping type" effects in tomographic reconstructions, i.e. radially decreasing attenuation coefficients towards the center.
Additional images are required for the BB correction: the first one is the open beam with the BB grid (BB-OB); the second is with the sample and the BB grid in the beam (BB-S). From the BB-OB images, the systematic biases that are due to scattering by the experimental apparatus can be estimated through extrapolation between the black bodies in the grid. From the BB-S images, the scattering contributions of the sample can be evaluated. Dedicated image processing was developed to interpolate, for each pixel position of the field of view, the neutron flux measured at the BB positions, thus estimating the background and scattering images [2,3]. Depending on the experiment type, BB-S images can and have to be acquired at different time steps during a full experimental run. In the case of kinetic radiography studies, BB-S images can be acquired prior to and/or after the non-BB images; if the sample changes so much during the measurement that the scattering background is affected, BB-S images may need to be interleaved with the non-BB images. For tomography, a sparse tomography with regular angular steps and interposed BBs is generally recommended, with a number of projections on the order of the square root of the number of projections of the conventional tomography. In the case of a highly symmetrical sample, for example a cylindrical one, this scheme can be further relaxed, and a set of BB-S images can be acquired at the same tomographic angle before and/or after the tomographic scan. Case 1: Wood The aim of the first experiment is to study the development of moisture content in wooden musical instruments. Eight spruce wood samples with a varnished surface were produced, in order to reproduce violin characteristics. The dimensions of each block were 10×10×50 mm³. The lateral surfaces were sealed to ensure that the sorption process was limited to the top and bottom surfaces of each sample. The aim of the specific experiment was the analysis of the time-dependent moisture content (MC) distribution over the wood cross section. In a climate chamber [4], the samples were kept at a controlled temperature of 20°C, with the relative humidity (RH) initially set at 35%. While keeping the temperature constant, the relative humidity was raised up to 95%. After 20 min, the RH reached the 95% level; the RH was then kept constant at 95% for 5 h. The RH was subsequently reduced back to 35% and kept constant for another 5 h. During this process and within this sample environment, time-resolved radiographs were obtained at the thermal neutron imaging beamline NEUTRA [5] at the neutron source SINQ at PSI using a scintillator/camera detector system with a field of view (FOV) of 150×150 mm². The scintillator used was a 50 µm thick LiF/ZnS-based screen. The camera was an Andor Neo sCMOS, where the optics was set to a FOV of 161×136 mm with a 2560×2160 pixel chip, resulting in an effective pixel size of 63.1 µm. Each image in the time series featured 15 s exposure time, with a time increment chosen to be 5 min. The total experiment duration for a single set of samples was 10.5 h, during which 130 radiographs were acquired.
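Before turning to the analysis of these radiographs, it is useful to sketch the correction step itself: the intensities measured behind the black bodies are interpolated across the whole field of view, and this estimated additive background is subtracted from both the sample and the open-beam images before normalization. The following Python fragment is a minimal illustration of this principle; the function and variable names are hypothetical, and the actual processing used in this work is the ImageJ plugin and MuhRec implementation of Refs. [2,3].

```python
import numpy as np
from scipy.interpolate import griddata

def estimate_background(bb_image, bb_mask):
    """Extrapolate the scattering background over the full field of view
    from the intensities measured behind the black bodies (BB).

    bb_image : 2D array, image acquired with the BB grid in the beam
    bb_mask  : 2D boolean array, True at pixels shadowed by a BB
    """
    ys, xs = np.nonzero(bb_mask)
    values = bb_image[bb_mask]
    yy, xx = np.mgrid[0:bb_image.shape[0], 0:bb_image.shape[1]]
    # Smooth interpolation between BB positions; nearest-neighbour fill at the edges
    bg = griddata((ys, xs), values, (yy, xx), method="cubic")
    bg_nn = griddata((ys, xs), values, (yy, xx), method="nearest")
    return np.where(np.isnan(bg), bg_nn, bg)

def bb_corrected_transmission(sample, open_beam, bb_s, bb_ob, bb_mask):
    """Scatter-corrected transmission: subtract the estimated additive
    backgrounds from both the sample and the open-beam images."""
    bg_s = estimate_background(bb_s, bb_mask)
    bg_ob = estimate_background(bb_ob, bb_mask)
    return (sample - bg_s) / (open_beam - bg_ob)
```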
Open beam and sample images with a BB grid were obtained at the beginning of the experiment, at an RH of 35%. The sample scattering and background were extrapolated and subtracted from the radiographs through an ImageJ plugin implementing the BB correction. From the normalized images, the change in wood MC at each time step was computed as $\Delta MC_t = \rho_{h2o}\,\Delta d_{h2o,t}/(\rho_{wood}\, d_{wood})$, where $\rho_{h2o}$ and $\rho_{wood}$ are the densities of water and oven-dry wood, $d_{wood}$ is the oven-dry wood thickness, and the difference in water thickness is expressed as $\Delta d_{h2o,t} = -\ln(T_t/T_0)/\Sigma_{h2o}$, with $\Sigma_{h2o}$ being the attenuation coefficient of water with respect to the NEUTRA spectrum, and $T_t$ and $T_0$ the transmission images in the wood regions for each time step t and for the reference time 0 (when RH = 35%), respectively. Case 2: Soil A soil sample was taken from a filter layer of a test bed simulating a bioretention cell, also called a rain garden, which is a low-impact development construction that accumulates, infiltrates and treats storm water. The sample was composed of 50% sand, 20% topsoil and 30% compost. The resulting soil mixture contained a 12% mass fraction of particles smaller than 2 µm, a 14% mass fraction of particles sized between 2 and 50 µm, and a 74% mass fraction of particles sized between 50 and 2000 µm. The particle density of the soil was 2563 kg/m³. The sample was imaged at the cold neutron imaging beamline ICON [6] at PSI using a 100 µm thick LiF/ZnS-based scintillator coupled with an Andor Neo sCMOS camera with a FOV of 40×40 mm² and an effective pixel size of 68 µm. During imaging, rainfall episodes were simulated at constant flux using heavy water as the flowing fluid. Fluid drainage happened by gravity, while the inflow and outflow of fluid were monitored with balances. As drying and wetting are fast processes, the imaging technique has to be fast enough to capture the water flow. On-the-fly tomographies [7] were acquired with a continuous rotation of 360°/min. Four 360° turns with 300 projections per complete turn were performed, with individual projection exposure times of 0.2 s (rate: 5 fps) and a corresponding neutron dose per projection of about 100 neutrons/pixel. BB images were also acquired on-the-fly at the beginning and at the end of the experiment, thus corresponding to the dry and wet conditions. CT reconstruction featuring the BB correction was done with our in-house open-source software MuhRec [2,3]. Results Case 1: Wood Fig. 1 shows the results of the image data processing for the wood samples with and without BB correction in the two cases of minimal and maximal RH, i.e. 35% and 95%, obtained at time 0 and after 5 h. In both cases, the effect of the correction is clearly visible, with lower sample transmission values for the images normalized with BB correction. The extracted change in water mass within the wood samples at each time step is shown in Figure 2. Without BB correction, the water mass content is underestimated by up to 30% at the point of highest MC (95%, after 5 h). These results are confirmed by the variation of water mass measured before and after the experiment with a precision balance (circle in the picture), representing the reference measurement.
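As a worked illustration of the Case 1 quantification, the following Python fragment evaluates the MC change pixel-wise from a pair of BB-corrected transmission images. All numerical constants are placeholders (assumed values for illustration), not the calibrated quantities used in the experiment.

```python
import numpy as np

SIGMA_H2O = 3.5   # cm^-1, effective attenuation coefficient of water (assumed value)
RHO_H2O = 1.0     # g/cm^3, density of water
RHO_WOOD = 0.40   # g/cm^3, oven-dry spruce density (assumed value)
D_WOOD = 1.0      # cm, oven-dry wood thickness along the beam (assumed)

def delta_moisture_content(T_t, T_0):
    """Change in moisture content between time step t and the reference time,
    computed from BB-corrected transmission images of the wood region."""
    delta_d_h2o = -np.log(T_t / T_0) / SIGMA_H2O       # change in water thickness, cm
    return RHO_H2O * delta_d_h2o / (RHO_WOOD * D_WOOD)
```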
Case 2: Soil Radial mean values of the attenuation coefficients plotted in Fig. 4 show that correction with the BB approach results in higher attenuation coefficients (between 5 and 14%) for all radial positions. A cupping effect does not appear prominent in the results, owing to the small sample size and the inhomogeneity of the sample. However, when plotting the percent difference between the radial mean values obtained without and with BB correction (Fig. 4, right panels) against the radial distance from the sample center, a pronounced decrease of the difference when moving away from the sample center, as is typical of a cupping effect, can be observed; as expected, it is more pronounced for the wet condition. Conclusions We have presented the beneficial effect of the BB approach for scattering and systematic bias correction in two experimental datasets. In both cases, the additive background that results in underestimated attenuation coefficients appears to be mitigated through the BB correction. In the time-resolved radiographic study of wood samples, uncorrected images resulted in a clear underestimation of the water mass, while the BB-corrected measurements resulted in good agreement with an independent reference measurement. For the on-the-fly tomographies, the BB correction proved effective in compensating the cupping resulting from the scattering bias, even though the effect was weak and not easy to identify without the BB approach. The effect was found to contribute about 5% to the mean attenuation coefficient error, with higher significance for the wet condition. These results demonstrate that the BB correction is well applicable to time-resolved kinetic neutron imaging studies in 2D as well as in 3D. As hydrogenous materials are strong incoherent neutron scatterers, this type of correction is indispensable for sensitive quantitative studies in neutron imaging, for example when the aim is to quantify the amount of water. Figure 3: Reconstructed CT of the rain garden sample at dry and wet conditions without and with BB correction. Figure 4: Radial mean values of the attenuation coefficients without and with BB correction, and their percent difference.
Matter Fields and Non-Abelian Gauge Fields Localized on Walls Massless matter fields and non-Abelian gauge fields are localized on domain walls in a (4+1)-dimensional $U(N)_c$ gauge theory with $SU(N)_{L}\times SU(N)_{R}\times U(1)_{A}$ flavor symmetry. We also introduce $SU(N)_{L+R}$ flavor gauge fields and a scalar-field-dependent gauge coupling, which provides massless non-Abelian gauge fields localized on the wall. We find a chiral Lagrangian interacting minimally with the non-Abelian gauge field, together with nonlinear interactions of moduli fields, as the (3+1)-dimensional effective field theory up to second order in derivatives. Our result provides a step towards realistic model building in the brane-world scenario using topological solitons. §1. Introduction The gauge hierarchy problem is a good guiding principle for constructing theories beyond the Standard Model (SM). The brane-world scenario 1), 2), 3) is one of the most attractive proposals to solve this problem, besides models with supersymmetry (SUSY). 4) In the brane-world scenario, it is assumed that all fields except the graviton are localized on the (3+1)-dimensional world volume of a defect called a 3-brane, immersed in a higher-dimensional space-time called the bulk. In order to realize such a scenario dynamically, we may use a topological soliton. For instance, let us consider a domain wall solution as the simplest soliton. To obtain a (3+1)-dimensional world volume on the domain wall, we need to consider a theory in a (4+1)-dimensional space-time. Bulk fields in (4+1) dimensions can provide massless modes localized on the domain wall, besides many massive modes in general. After integrating over the massive modes, one obtains a low-energy effective field theory describing the effective interactions of the massless modes. Massless matter fields have been successfully localized on domain walls, 5) but localization of the gauge field on domain walls in field theories has been difficult. 6) It has been noted that the broken gauge symmetry in the bulk outside of the soliton inevitably makes the localized gauge field massive, with a mass of the order of the inverse width of the wall. 7), 8) To localize a massless gauge field, one needs to have the confining phase rather than the Higgs phase in the bulk outside of the soliton. Earlier attempts used a tensor multiplet in order to implement the Higgs phase in the dual picture, but this approach successfully localizes only a U(1) gauge field. 9) More recently, a classical realization of the confinement 10), 11) through a position-dependent gauge coupling has been successfully applied to localize the non-Abelian gauge field on domain walls. 12) The nontrivial profile of this position-dependent gauge coupling was naturally introduced on the domain wall background through a scalar-field-dependent gauge coupling function resulting from a cubic prepotential of supersymmetric gauge theories. The appropriate profile of the position-dependent gauge coupling was obtained from domain wall solutions using two copies of the simplest model, or from a model with fewer fields and a particular mass assignment. However, it was still a challenge to introduce matter fields in nontrivial representations of the gauge group of the localized gauge field. Parameters of soliton solutions are called moduli and can be promoted to fields on the world volume of the soliton. Massless fields in the low-energy effective field theory on the soliton background are generally given by these moduli fields.
Moduli with a non-Abelian global symmetry are often called the non-Abelian cloud, and have been explicitly realized in the case of domain walls using Higgs scalar fields with degenerate masses in U(N)_c gauge theories. 13) This model also has a non-Abelian global symmetry SU(N)_L × SU(N)_R × U(1)_A, which is somewhat similar to the chiral symmetry of QCD. If we turn this global symmetry into a local gauge symmetry, we should be able to obtain the usual minimal gauge coupling between these moduli fields and the gauge field. Since we wish to localize the gauge field on the domain wall, it is essential to choose the global symmetry of the moduli fields to be unbroken in the vacua (of both the left and right bulk outside of the wall). This choice will guarantee that the bulk outside of the domain wall is not in the Higgs phase. Therefore we are led to an idea where we introduce gauge fields corresponding to a flavor symmetry group of scalar fields which will be unbroken in the vacuum. If we introduce the additional scalar-field-dependent gauge coupling function similarly to the supersymmetric model, we should be able to localize both massless matter fields and the massless gauge field at the same time on the domain wall. The purpose of this paper is to present a (4+1)-dimensional field theory model of localized massless matter fields minimally coupled to the non-Abelian gauge field, which is also localized on the domain wall with the (3+1)-dimensional world volume. We also derive the low-energy effective field theory of these localized matter and gauge fields. To introduce the non-Abelian flavor symmetry (to be gauged eventually) in the domain wall sector, we replace one of the two copies of the U(1)_c gauge theory with the flavor symmetry U(1)_L × U(1)_R in Ref. 13) to realize the (subgroup of the) flavor SU(N)_{L+R} symmetry. In order to obtain the field-dependent gauge coupling function for the gauge field localization mechanism, 12) we also introduce a coupling between a scalar field and the gauge field strengths inspired by supersymmetric gauge theories, although we do not make the model fully supersymmetric at present. This scalar-field-dependent gauge coupling function gives an appropriate profile of the position-dependent gauge coupling through the background domain wall solution. With this localization mechanism for the gauge field, we find massless non-Abelian gauge fields localized on the domain wall. We also obtain the low-energy effective field theory describing the massless matter fields in a non-trivial representation of the non-Abelian gauge symmetry. Since our flavor symmetry resembles the chiral symmetry of QCD before introducing the gauge fields that are localized, we naturally obtain a kind of chiral Lagrangian as the effective field theory on the domain wall. We find an explicit form of the full nonlinear interactions of the moduli fields up to second order in derivatives. Moreover, these moduli fields are found to interact with the SU(N)_{L+R} flavor gauge fields in the adjoint representation. In analyzing the model, we mostly use the strong coupling limit for the domain wall sector. The strong coupling limit merely serves to describe our result explicitly at every stage. Even if we do not use the strong coupling, the physical features are unchanged. It is easy to expect that (part of) the gauge symmetry is broken when the walls separate in each copy of the domain wall sector.
Our results for the low-energy effective field theory show that the flavor gauge symmetry SU(N)_{L+R} is broken on non-coincident walls and that the associated gauge bosons acquire masses as the walls separate. This geometrical Higgs mechanism is quite similar to D-brane systems in superstring theory, so our domain wall system provides a genuine prototype of field-theoretical D3-branes. This is an interesting problem, which we plan to analyze further in the future. We also find indications that additional moduli will appear in the supersymmetric version of our model, which is also an interesting future problem to study. The organization of the paper is as follows. In section 2, we explain the localization mechanism by taking an Abelian gauge theory as an illustrative example. In section 3, we introduce the chiral model with the non-Abelian flavor symmetry for the domain wall sector and then also introduce gauge fields for the unbroken part of the flavor symmetry. By introducing the scalar-field-dependent gauge coupling function, we arrive at a localized massless gauge field interacting with massless matter fields in a nontrivial representation of the flavor gauge group. The low-energy effective field theory is also worked out. In section 4, an attempt is made to make the model supersymmetric. New additional features of the supersymmetric models are also described. In section 5, we summarize our results and discuss remaining issues and future directions. In Appendix A we discuss the domain wall solution of the gauged massive CP¹ sigma model. Appendix B describes the derivation of the effective Lagrangian, which includes the full nonlinear interactions between the moduli fields. Appendix C contains the derivation of the positivity condition for the potential appearing in section 4. §2. Abelian-Higgs model of gauge field localization The domain wall sector Let us illustrate the localization mechanism for the gauge fields and the matter fields on domain walls by using the simplest model in (4+1)-dimensional spacetime: two copies (i = 1, 2) of U(1) models, each of which has two flavors (L, R) of charged Higgs scalar fields H_i = (H_{iL}, H_{iR}). We use the metric η_{MN} = diag(+, −, ···, −), M, N = 0, 1, ···, 4. The Higgs field H_i is charged with respect to the U(1)_i gauge symmetry, with the corresponding covariant derivative, where w_{iM} is the U(1)_i gauge field with its field strength. Since we want domain walls, we choose the mass matrix M_i = diag(m_i, −m_i), resulting in the U(1)_{iA} flavor symmetry.*) We have included the neutral scalar fields σ_i in this Abelian-Higgs model. The gauge coupling g_i appears not only in front of the kinetic terms of the gauge fields and σ_i, but also as the quartic coupling constant of H_i. Both these features are motivated by supersymmetry. Indeed, we can embed this bosonic Lagrangian into a supersymmetric model with eight supercharges by adding appropriate fermions and bosons, which will not play a role in obtaining domain wall solutions. We have taken this special relation among the coupling constants only to simplify the concrete computations below. One may repeat the following procedure in models with more generic coupling constants without changing the essential results. The first term of the potential is of the wine-bottle type, so the Higgs fields develop nonzero vacuum expectation values, and there are two discrete vacua for each copy i. Thanks to the special choice of the coupling constants in L_i motivated by supersymmetry, there are Bogomol'nyi-Prasad-Sommerfield (BPS) domain wall solutions in these models.
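For concreteness, such a bosonic Lagrangian with eight supercharges conventionally takes the following form; the LaTeX block below is a sketch under our own normalization assumptions, not a quotation of the paper's equations.

```latex
\mathcal{L}_i \;=\; -\frac{1}{4g_i^2}\,(F^{(i)}_{MN})^2
 \;+\; \frac{1}{2g_i^2}\,(\partial_M\sigma_i)^2
 \;+\; |D_M H_i|^2
 \;-\; \frac{g_i^2}{2}\left(H_i H_i^\dagger - v_i^2\right)^2
 \;-\; \left|(\sigma_i\,\mathbf{1}_2 - M_i)\,H_i\right|^2 ,
 \qquad M_i = \mathrm{diag}(m_i,\,-m_i),
```

with the two discrete vacua $H_i = (v_i, 0),\ \sigma_i = m_i$ and $H_i = (0, v_i),\ \sigma_i = -m_i$ (here $H_i$ is treated as a flavor doublet). Note how $g_i$ indeed appears both in front of the kinetic terms and as the quartic coupling, as stated in the text.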
Let y be the coordinate of the direction orthogonal to the domain wall, and we assume that all the fields depend only on y. Then, as usual, the Hamiltonian can be completed into a sum of squares plus a topological term, so the Hamiltonian is bounded from below. This bound is called the Bogomol'nyi bound, and it is saturated when the BPS equations are satisfied. In order to obtain the domain wall solution interpolating between the two vacua in Eq. (2.6), we impose the corresponding boundary conditions. The tension T_i of the domain wall is then given by a topological charge. The second of the BPS equations (2.8) can be solved by the moduli matrix H_{i0}. For a given H_{i0}, the scalar function ψ_i is determined by the master equation. The asymptotic behavior of the field ψ_i is determined by the condition that the configuration reaches the vacuum at left and right infinities. There exists a redundancy in the decomposition in Eq. (2.11), which is called the V-transformation (2.14). For example, a single domain wall solution centered at y = 0 can be generated by a moduli matrix, and the master equation follows accordingly. No analytic solutions of the master equation have been found for finite gauge couplings g_i, so we must solve it numerically (see the sketch below). The corresponding solution is shown in Fig. 1. The generic domain wall solutions are generated by the generic moduli matrices (after fixing the V-transformation), where the complex constants C_{iL}, C_{iR} are free parameters containing the moduli parameters of the BPS solutions. The remaining degree of freedom in C_{iL}, C_{iR} can be eliminated by the V-transformation in Eq. (2.14) and has no physical meaning. It is obvious that the real parameter y_i is the translational modulus of the domain wall. The other parameter α_i is an internal modulus, which is the Nambu-Goldstone (NG) mode associated with the U(1)_{iA} flavor symmetry spontaneously broken by the domain walls. One can take, if one wishes, the strong gauge coupling limit of the Lagrangian L_i. As is well known, the U(1) gauge theory with two flavors of Higgs scalars becomes, in the strong gauge coupling limit, a non-linear sigma model whose target space is CP¹. The gauge fields and the neutral scalar field become infinitely massive and lose their kinetic terms. They are mere Lagrange multipliers in this limit; solving for them and plugging the solutions back into L_i yields the CP¹ model with a projection operator. Introducing an inhomogeneous coordinate φ_i of CP¹, the Lagrangian of the CP¹ model can be written in terms of φ_i. Let us reconsider the domain wall solutions in this limit. The Hamiltonian can again be completed into Bogomol'nyi form, with the corresponding BPS equation and boundary conditions, and the resulting tension of the domain wall is the same as in the finite gauge coupling model. In this way, the strong gauge coupling limit has a great advantage compared to the finite gauge coupling case: one can solve the BPS equation exactly and identify the moduli parameters in the analytic solutions. Furthermore, there are no important differences between the domain wall solutions at finite coupling (Abelian-Higgs model) and at strong coupling (non-linear sigma model). Both solutions have the same domain wall tension and the same number of moduli parameters. To see the difference explicitly, let us compare the configuration of the neutral scalar field σ_i, which can be written in closed form in the strong gauge coupling limit.
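Since no analytic solution of the master equation is known at finite gauge coupling, a numerical boundary-value solve is the natural route. The Python sketch below does this for a single wall, assuming the normalization $\partial_y^2\psi = 2g^2v^2\,(1 - \Omega_0\,e^{-\psi})$ with $\Omega_0 = 2\cosh 2my$, together with the boundary behavior $\psi \to \log[2\cosh 2m(y - y_0)]$ quoted later in the text; the numerical factors here are our assumption, not a quotation of the paper's equation.

```python
import numpy as np
from scipy.integrate import solve_bvp

g, v, m, L = 1.0, 1.0, 1.0, 8.0   # couplings and half box size, units with v = m = 1 (assumed)

def omega0(y):
    # Omega_0 = H_0 H_0^dagger / v^2 for a single wall centered at y = 0
    return 2.0 * np.cosh(2.0 * m * y)

def rhs(y, u):
    # u[0] = psi, u[1] = psi'; master equation with the assumed normalization
    return np.vstack([u[1], 2.0 * g**2 * v**2 * (1.0 - omega0(y) * np.exp(-u[0]))])

def bc(ua, ub):
    # match the vacuum asymptotics psi -> log(2 cosh 2my) at the box edges
    edge = np.log(2.0 * np.cosh(2.0 * m * L))
    return np.array([ua[0] - edge, ub[0] - edge])

y = np.linspace(-L, L, 801)
u0 = np.vstack([np.log(2.0 * np.cosh(2.0 * m * y)),    # strong-coupling solution as initial guess
                2.0 * m * np.tanh(2.0 * m * y)])
sol = solve_bvp(rhs, bc, y, u0, tol=1e-8, max_nodes=100000)
psi = sol.sol(y)[0]   # finite-coupling wall profile, cf. Fig. 1
```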
In Fig. 1, we show the configurations of σ_i in two cases: at small finite gauge coupling and in the strong gauge coupling limit. As can be seen from the figure, there are no significant differences. Let us next derive the low-energy effective theory on the domain wall. We integrate out all the massive modes while keeping the massless modes. We use the so-called moduli approximation, in which the dependence on the (3+1)-dimensional spacetime coordinates enters the effective Lagrangian only through the moduli fields (2.32). The effective Lagrangian for the moduli field C_i(x^µ) can be obtained by plugging this into the Lagrangian L_i and integrating over y. This can be done explicitly, where the energy of the soliton solution is neglected since it does not contribute to the dynamics of the moduli. Note that 2m_i v_i² is precisely the domain wall tension. This is a free-field Lagrangian. Although we have derived this effective Lagrangian in the strong gauge coupling limit, we obtain the same Lagrangian at finite gauge coupling. In other words, the effective Lagrangian cannot distinguish the infinite from the finite coupling case, at least at quadratic order in the derivative expansion. Localization of the Abelian gauge fields In the previous subsection, we have seen that the NG modes of the translation and of the global U(1) symmetry are the only massless modes in the Abelian-Higgs model. They are localized on the domain wall. There is no massless gauge field on the domain wall, and all the modes contained in the gauge field are massive. The mass of the lightest mode of the gauge field is of the order of the inverse of the width of the domain wall, since the bulk outside of the domain wall is in the Higgs phase. The low-energy effective Lagrangian for the massless fields is obtained after integrating out the massive modes, including the gauge fields. In order to obtain a massless gauge field localized on the domain wall, we need a new gauge symmetry which is unbroken in the bulk. Recently, a new mechanism was proposed to localize gauge fields on domain walls. 12) A key ingredient is the so-called dielectric coupling constant 10), 11) for the new gauge symmetry. To illustrate the new localization mechanism, let us introduce a new U(1) gauge field a_M which we wish to localize on the domain wall. Since this gauge symmetry should be unbroken in the bulk, we consider the case where all the Higgs fields are neutral under this newly introduced U(1) gauge symmetry. The gauge field a_M is assumed to couple to the neutral scalar fields σ_i only through a particular combination, where λ is a real constant with unit mass dimension, in accordance with the (4+1)-dimensional spacetime, and the field strength f_{MN} is defined from a_M. The field-dependent gauge coupling function is given by this combination, except for the additional kinetic term (the last term) of the (3+1)-dimensional gauge field a_µ, which is the zero mode (y-independent mode) of the (4+1)-dimensional field a_M. The (3+1)-dimensional gauge coupling constant is then obtained by integration over y, where we have used the asymptotic behavior ψ_i → log[2 cosh 2m_i(y − y_i)] as |y| → ∞. Note that this result is again independent of the gauge couplings g_i in the domain wall sector.
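To make the localization quantitative, one can evaluate the position-dependent coupling numerically. The sketch below assumes strong-coupling wall profiles $\sigma_i = m_i \tanh 2m_i(y - y_i)$ (so that $\sigma_i/m_i \to \pm 1$ in the two vacua) and a coupling function proportional to $\lambda(\sigma_1/m_1 - \sigma_2/m_2)$, which is nonnegative between the two walls; its integral over y then grows linearly with the wall separation, $1/g_{3+1}^2 \propto \lambda\,(y_2 - y_1)$, consistent with the gauge coupling squared $1/[4\lambda(y_2^0 - y_1^0)]$ quoted below. The overall normalization here is our assumption.

```python
import numpy as np

lam, m1, m2 = 1.0, 1.0, 1.0      # lambda and wall-sector masses (assumed units)
y1, y2 = -2.0, 2.0               # wall positions; y2 > y1 keeps the coupling function positive

y = np.linspace(-60.0, 60.0, 200001)
sigma1 = m1 * np.tanh(2.0 * m1 * (y - y1))   # strong-coupling wall profiles (assumed)
sigma2 = m2 * np.tanh(2.0 * m2 * (y - y2))

inv_e2 = lam * (sigma1 / m1 - sigma2 / m2)   # position-dependent 1/e^2(y), nonnegative
inv_g4_sq = np.sum(inv_e2) * (y[1] - y[0])   # effective 4d coupling: integral over y

print(inv_g4_sq, 2.0 * lam * (y2 - y1))      # both evaluate to ~ 2*lam*(y2 - y1)
```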
In summary, the low-energy effective Lagrangian follows. Now we separate the quantum fields (fluctuations) from the classical background moduli parameters, and the effective Lagrangian up to second order in the small quantum fluctuations is obtained. We note that the massless gauge field a_µ has a positive finite gauge coupling squared, $1/[4\lambda(y_2^0 - y_1^0)]$, provided $y_2^0 - y_1^0 > 0$.*) Although we succeeded in localizing the massless U(1) gauge field a_µ on the domain walls, the Lagrangian Eq. (2.42) has no charged matter fields minimally coupled to the localized gauge field a_µ. To obtain matter fields interacting with the localized gauge field, one may be tempted to identify the Higgs fields H_i = (H_{iL}, H_{iR}) as matter fields with charges (1, −1).**) The minimal gauge interaction of the Higgs fields with a_M is introduced through modified covariant derivatives. Since the moduli field C_i is then charged, the derivatives in the low-energy effective theory Eq. (2.33) should be replaced by covariant derivatives. It is a straightforward task to derive the effective Lagrangian with the covariant derivative along the same line of reasoning as in the previous case. This clearly shows that the new gauge field a_µ is not massless, due to the Higgs mechanism, and should be integrated out together with the other massive fields. Namely, the low-energy effective Lagrangian does not include massless gauge fields, since the U(1) symmetry which we gauged is broken by the domain wall. A more explicit example in the strong gauge coupling limit is described in Appendix A. Thus the Abelian-Higgs model in this section gives an important lesson: we should not gauge a symmetry which is broken by the domain wall solution, since the corresponding gauge fields may be localized on the domain walls but become massive and must be integrated out of the low-energy effective theory. In the next section, we will give a model with a non-Abelian global symmetry whose unbroken subgroup can be gauged to yield massless localized gauge fields on the domain wall.
*) Here we are content with the fact that the positivity of the gauge kinetic term is assured at least in a finite region of moduli space, instead of just at a point. However, it is possible to make a more economical model which has fewer moduli, and in which the positivity of the gauge kinetic term is assured. 12)
**) We consider the diagonal subgroup U(1)_A of U(1)_{1A} and U(1)_{2A}. The U(1)_{iA} global symmetries are actually broken by the domain wall solution; we consider this gauging to leading order in the gauge coupling only to illustrate the Higgs mechanism for the broken symmetry.
§3. The chiral model In this section we study domain walls in the chiral model, which is a natural extension of the Abelian-Higgs model of the previous section. This chiral model leads to two important consequences: 1) massless non-Abelian gauge fields are localized on the domain wall, and moreover 2) the non-trivially interacting scalar fields are also localized on the domain walls. 13) To localize the gauge field in a simple manner, we again introduce two sectors L_1 and L_2, but only the former is extended to a Yang-Mills-Higgs system; the latter takes the same form as (2.1). The second sector couples to the first through a coupling of the type described in (2.35) after gauging the flavor symmetry, and it plays a role in the localization of the gauge fields, combined with the first sector. The matter contents are summarized in Table I.
Since the presence of two factors of SU(N) global symmetry resembles the chiral symmetry of QCD, we call this Yang-Mills-Higgs system the chiral model. The Lagrangian is then given accordingly, with the adjoint scalar denoted as Σ_1 and with the corresponding covariant derivative, field strength and mass matrix. Let us note that the chiral model reduces to the Abelian-Higgs model in the limit N → 1, by deleting all the SU(N) groups. The second sector is necessary only to realize the field-dependent gauge coupling function similar to (2.35), as we will discuss in the subsequent subsection. In the rest of this subsection, we focus only on the first sector (i = 1) and suppress the index i = 1. The symmetry transformations act on the fields in the standard way. There exist N + 1 vacua in which the fields develop VEVs labelled by r = 0, 1, 2, ···, N; we refer to these vacua by the label r. In the r-th vacuum, both the local gauge symmetry U(N)_c and the global symmetry are broken, but a diagonal global symmetry remains unbroken (color-flavor locking). As in the Abelian-Higgs model, the BPS equations for the domain walls can be obtained through the Bogomol'nyi completion of the energy density, with the assumption that all the fields depend only on the fifth coordinate y and W_µ = 0. This bound is saturated when the BPS equations are satisfied, and the tension of the domain wall is given by the topological charge. Let us concentrate on the domain wall which connects the 0-th vacuum at y → ∞ and the N-th vacuum at y → −∞. Its tension can be read off, 14) where ψ is the solution of the master equation (2.12) in the Abelian-Higgs model. Eq. (3.8) determines the unbroken global symmetry of the N-th vacuum. The domain wall solution further breaks these unbroken symmetries because it interpolates between the two vacua.*) This spontaneous breaking of the global symmetry gives NG modes on the domain wall as massless degrees of freedom valued on a coset, similarly to the chiral symmetry breaking in QCD. Since our model can be embedded into a supersymmetric field theory, these NG modes (U(N) chiral fields) appear as complex scalar fields accompanied by additional N² pseudo-NG modes.**)
*) The unbroken generators of U(1)_{A+c} for the r-th vacuum contain a different combination of U(N)_c generators depending on r. Therefore the right and left vacua actually preserve different U(1)_{A+c}, and the wall solution does not preserve any of these U(1)_{A+c}.
**) One of them is actually a genuine NG mode corresponding to the broken translation.
Localization of the matter fields In the remainder of this subsection, we will give the low-energy effective Lagrangian on the domain walls, in which the massless moduli fields (the matter fields) are localized. The best way to parametrize these massless moduli fields is to use the moduli matrix formalism, 13), 14), 15) where S ∈ GL(N, C) and Ω = SS† is the solution of the corresponding master equation. We have used the V-transformation to identify the moduli e^φ, which is a complex N × N matrix. It can be parametrized by an N × N hermitian matrix x̂ and a unitary matrix U, where U is nothing but the U(N) chiral field associated with the spontaneous symmetry breaking Eq. (3.18), and x̂ represents the pseudo-NG modes whose existence we promised above. In the strong gauge coupling limit g → ∞, the solution of the master equation is simply Ω = Ω_0.
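For reference, the master equation referred to above is usually written in the following form in the moduli matrix literature; the normalization of the right-hand side below is our assumption rather than the paper's equation.

```latex
\partial_y\!\left(\Omega^{-1}\,\partial_y \Omega\right) \;=\; 2g^2 v^2 \left(\mathbf{1}_N - \Omega^{-1}\,\Omega_0\right),
\qquad
\Omega_0 \;\equiv\; \frac{1}{v^2}\, H_0\, e^{2My}\, H_0^\dagger ,
```

so that the strong coupling limit $g \to \infty$ forces the right-hand side to vanish and gives $\Omega = \Omega_0$, as stated in the text.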
After fixing the U(N)_c gauge, we obtain the solution explicitly. Let us denote, for brevity, ŷ (essentially m y 1_N − x̂; cf. the substitution used in Appendix B); the Higgs fields are then given in terms of ŷ. From this solution, one can easily recognize that the eigenvalues of x̂ correspond to the positions of the N domain walls in the y direction. Now we promote the moduli parameters x̂ and U to fields on the domain wall world volume, namely functions of the world volume coordinates x^µ. We plug the domain wall solutions H_{L,R}(y; x̂(x^µ), U(x^µ)) into the original Lagrangian L in Eq. (3.2) at g → ∞ and pick up the terms quadratic in derivatives. The low-energy effective Lagrangian so obtained has the massive gauge field W_µ eliminated by using its equation of motion. Using the solutions for H_L and H_R, we have found a closed formula for the effective Lagrangian, up to second order in derivatives but with full nonlinear interactions involving the moduli fields x̂ and U. The detailed derivation is given in Appendix B. Here we exhibit the result only in the leading orders of U − 1 and x̂. The quantum numbers are summarized in Table II. We now introduce a field-dependent gauge coupling function g²(Σ) for A_M, which is inspired by the supersymmetric model in Ref. 12). The next step is to derive the low-energy effective theory on the domain wall world volume in the moduli approximation, as in the previous subsections. Again, we promote the moduli parameters to fields on the domain wall world volume and pick up the terms up to quadratic order in the derivative ∂_µ. Similarly to section 3.2, we utilize the strong gauge coupling limit g_i → ∞ to simplify the computation without changing the final result. Let us emphasize that we keep the field-dependent gauge coupling function e(Σ) finite. The spectrum of massless NG modes is unchanged by switching on the SU(N)_{L+R} gauge interactions. We just repeat a computation similar to that in section 3.2. Again we focus on the first sector L_1 and suppress the index i = 1 of the fields. Since the color gauge fields W_µ become auxiliary fields and are eliminated through their equations of motion, it is convenient to define the covariant derivative only for the flavor (SU(N)_{L+R}) gauge interactions, as in (3.36). We then obtain the effective Lagrangian of the first sector. Eliminating W_µ, we obtain, after some simplification, an expression for the integrand of the effective Lagrangian in which we defined fields H_{ab}, with the label ab of the adjoint representation of the flavor gauge group SU(N)_{L+R+c}, and the corresponding covariant derivative. In Appendix B, we describe fully the procedure to derive the effective Lagrangian by substituting (3.26) and (3.27) and rewriting in terms of the moduli fields x̂ and U. Here we merely state the result, in which a Lie derivative with respect to A appears. The covariant derivative D_µ is defined accordingly. Eqs. (3.19) and (3.20) show how the moduli transform. The complex modulus e^φ is decomposed into the hermitian part e^{x̂} and the unitary part U in Eq. (3.23). Since we can express e^{2x̂} = e^φ e^{φ†} and U = e^{−x̂} e^φ, we find that they transform in the adjoint representation; here y_i is the wall position for the i-th domain wall sector. Summarizing, we obtain the effective Lagrangian, where L_{2,eff} is given in (2.34). This is the main result of this paper. We have succeeded in constructing the low-energy effective theory in which the matter fields (the chiral fields) and the non-Abelian gauge fields are localized with non-trivial interactions.
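The decomposition of the complex moduli into wall positions and chiral fields is precisely a matrix polar decomposition, and it is easy to experiment with numerically. The following Python sketch (our own illustration with random moduli, not code from the paper) extracts x̂ and U from e^φ via the left polar decomposition e^φ = e^{x̂} U and reads off the wall positions as the eigenvalues of x̂.

```python
import numpy as np
from scipy.linalg import expm, logm, polar

rng = np.random.default_rng(0)
N = 3
phi = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # complex moduli matrix

M = expm(phi)                  # e^phi
u, p = polar(M, side='left')   # left polar decomposition: e^phi = p @ u, p hermitian positive
x_hat = logm(p)                # hermitian part: p = e^{x_hat}
positions = np.linalg.eigvalsh(x_hat)   # wall positions (in units set by the mass m, assumed)

# consistency checks: e^{2 x_hat} = e^phi e^{phi^dagger}, and U is unitary
assert np.allclose(p @ p, M @ M.conj().T)
assert np.allclose(u @ u.conj().T, np.eye(N))
```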
We show the profiles of the "wave functions" of the localized massless gauge field and massless matter fields, as functions of the coordinate y of the extra dimension, in Fig. 2. As is seen from Eq. (3.47), the flavor gauge symmetry SU(N)_{L+R+c} is further (partly) broken and the corresponding gauge field A_µ becomes massive when the fluctuation φ, with e^φ = e^{x̂}U, develops non-zero vacuum expectation values. In particular, x̂ is interesting because its nonvanishing (diagonal) values have the physical meaning of separations between walls away from the coincident case. For instance, if all the walls are separated, SU(N)_{L+R+c} is spontaneously broken to the maximal U(1) subgroup U(1)^{N−1}. However, if r walls are still coincident and all other walls are separated, we have the unbroken gauge symmetry SU(r) × U(1)^{N−r+1}. Then, part of the pseudo-NG modes x̂ turns into NG modes associated with the further symmetry breaking SU(N)_{L+R+c} → SU(r) × U(1)^{N−r+1}, so that the total number of zero modes is preserved. 13) These new NG modes, called the non-Abelian cloud, spread between the separated domain walls. 13) The flavor gauge fields eat the non-Abelian cloud and acquire masses which are proportional to the separation of the domain walls. This is the Higgs mechanism in our model. This geometrical understanding of the Higgs mechanism is quite similar to D-brane systems in superstring theory, so our domain wall system provides a genuine prototype of field-theoretical D3-branes. §4. Embedding into supersymmetric theory A crucial point in localizing the gauge field around the domain wall is the coupling between the scalar and the gauge kinetic term. Such a coupling is naturally realized in (4+1)-dimensional supersymmetric gauge theory. 12) Such a theory generally consists of a hypermultiplet part and a vector multiplet part. The latter is specified by the so-called prepotential. In a (4+1)-dimensional theory the prepotential generally allows terms up to cubic order in the vector multiplets, 17) which provide interactions among vector multiplets such as (3.33). Supersymmetric model In embedding the model into supersymmetric gauge theories in (4+1) dimensions, we will assign a non-Abelian global flavor symmetry SU(N_i)_V to each copy (i = 1, 2) of the domain wall sector, instead of to only one copy as in (3.34) of the previous section. This contains the model of (3.34) as the limiting case N_2 → 1, and may offer a more general situation phenomenologically. To formulate supersymmetric gauge theories, we need to introduce Y_i as auxiliary fields of the U(N_i)_c vector multiplet, and Φ_i and Ŷ_i as adjoint scalar fields and auxiliary fields of the SU(N_i)_V vector multiplet. As bosonic fields of theories with eight supercharges, we also need to double the scalar fields H_i by introducing another set H̃_i. They are in the same representations as H_i under U(N_i)_c and U(1)_{iA}. Explicit charge assignments for the hypermultiplet matter fields and the adjoint scalar fields are summarized in Table III. The resultant supersymmetric Lagrangian is written in terms of these fields, where α, β, ··· denote all gauge groups and their generators collectively. We label them with an ordering in which 0_i denotes the U(1)_i parts of the U(N_i)_c gauge group, while I_i = 1, ···, N_i² − 1 are color indices of SU(N_i)_c and A_i = 1, ···, N_i² − 1 denotes flavor indices of the SU(N_i)_V gauge group.
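To see why a cubic prepotential produces the scalar-field-dependent gauge coupling used for localization, consider the following schematic example (our illustration, not the paper's exact prepotential): a cubic cross term between a neutral scalar Σ⁰ and the flavor vector multiplet scalars Σ^A yields a coupling function linear in Σ⁰.

```latex
a(\Sigma) \;\supset\; \frac{\kappa}{2}\,\Sigma^0 \sum_A \Sigma^A \Sigma^A
\quad\Longrightarrow\quad
a_{AB}(\Sigma) \;=\; \frac{\partial^2 a}{\partial \Sigma^A\,\partial\Sigma^B}
 \;=\; \kappa\,\delta_{AB}\,\Sigma^0 ,
```

so the kinetic term $-\tfrac{1}{4}\,a_{AB}\,F^A_{MN}F^{B\,MN}$ of the flavor gauge field acquires a position-dependent coefficient once $\Sigma^0$ develops a wall profile, which is exactly the localization mechanism discussed in sections 2 and 3.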
The scalar fields Σ^α and the auxiliary fields Y^α are written out explicitly, and similarly the field strengths F^α_{MN} and the gauge fields W^α_M. We adopt the standard convention for the U(N_i)_c and SU(N_i)_V matrices. The covariant derivatives of Σ^{I_i} and Φ^{A_i} are defined in the adjoint representation. We will not display the Chern-Simons term L_{iCS} and the fermionic term L_{ifermion}, since we do not need them for our analysis. The functions a_{αβ}(Σ) are gauge coupling functions, which are given as second derivatives of the prepotential, $a_{\alpha\beta}(\Sigma) = \partial^2 a(\Sigma)/\partial\Sigma^\alpha\partial\Sigma^\beta$ (4.12). From the above prepotential, we see that the coupling constants of U(1)_i and SU(N_i)_c are given by ĝ_i and g_i, respectively. We denote the coupling function of SU(N_i)_V as e_i(Σ), but will suppress the argument Σ and simply write e_i in the following. The constants c^α are the coefficients of the Fayet-Iliopoulos (FI) terms, allowed to be non-zero only for the U(1) parts of the gauge groups (4.14). We have assumed that both FI parameters c^0_1 and c^0_2 are positive and point in the same direction in SU(2)_R, which is chosen to be along the third component. In this setup, the H̃ fields vanish in the classical solution. Moreover, they do not contribute to the desired order of the effective Lagrangian. Similarly, we have neglected the auxiliary fields Y other than the third component in SU(2)_R, and we have denoted them as Y^α. Hence we can call the potential obtained after eliminating the auxiliary fields Y the D-term potential. The F-term potential V_{iF} can be worked out from the superpotential, where we restored the tilde fields H̃ to facilitate writing the superpotential. After eliminating the auxiliary fields F, we obtain (4.3). Finally, let us work out explicit forms of the D-term potential V_D. Collecting terms containing the auxiliary fields Y, we obtain an expression in which (4.19) are Hermitian matrices, with the corresponding decomposition. We observe that in the potential (4.17), the Y^{I_i} do not couple to the rest of the auxiliary fields and can easily be eliminated. Having done this, we collect the U(1)_i and SU(N_i)_V terms into a matrix form labeled by α, β = 0_1, ···. Eliminating the remaining auxiliary fields, we obtain the D-term potential, in which the matrix G = (G_{αβ}) is explicitly given together with its inverse, where we abbreviated $\tilde g^2 = \hat g_1^2 m_2^2 + \hat g_2^2 m_1^2$ (4.26). Positivity of Potential The F-term potential (4.3) is manifestly positive. The D-term potential (4.21) is positive definite under certain conditions. To find the condition, we decompose (4.21) into two pieces. It is clear that V_{1D} is positive definite by itself. Therefore we can focus only on V_{2D}, which is positive if and only if G is positive definite. It is easy to recognize that the positivity of G is manifest once the adjoint scalars vanish, Φ_i = 0. Nevertheless, it is instructive and reassuring to consider the potential as well as the BPS equations keeping the adjoint scalars Φ_i nonzero. To ascertain the positivity of G we need to compute its eigenvalues. This is most easily done by looking at its determinant (we leave the derivation of this result to Appendix C) (4.33). Requiring det G > 0, we obtain a condition; in Appendix C we show that this condition is both necessary and sufficient to ensure positivity of the matrix G in Eq. (4.24). BPS equations Let y denote the coordinate of the direction orthogonal to the domain wall. Since we assume Lorentz invariance in the other dimensions, the gauge field components other than the y component must vanish.
The energy density H for domain walls is then given with color-flavor indices α, β spanning all values as in Eq. (4.4), and we have incorporated the color sector α = I_1, I_2 into the definition of the matrix G for brevity. Accordingly, we have incorporated the definition (r − c)^{I_i} = r^{I_i}. Since there is no mixing of the color sector with the rest, the inverse is calculated trivially, and the non-color part remains the same as in (4.1). Now we observe that the mixing due to the cubic prepotential occurs only in the kinetic term and the potential of the vector multiplets. Moreover, they appear as G and G^{−1}, respectively. Therefore the cross term coming out of the Bogomol'nyi completion has no dependence on the metric G. This fact implies that the cancellation of cross terms, giving the topological charge, goes through unaffected by the mixing of the vector multiplets. More explicitly, we carry out the Bogomol'nyi completion: the last term gives the usual Bogomol'nyi bound and becomes the topological charge, while the line before that is a total derivative which gives a vanishing contribution for an infinite line −∞ < y < ∞. The BPS equations for the H's and H̃'s of the hypermultiplets can easily be solved by using the moduli matrix approach. We define S_{ic}, S_{iF} and ψ_i, where S_{ic}, S_{iF} ∈ SL(N_i, C). The hypermultiplet fields H̃_{iL} and H̃_{iR} do not contribute to the domain wall solution and therefore vanish. We write down (4.42)-(4.44) in terms of the gauge invariant fields; the adjoint scalar fields of the vector multiplets are given accordingly (4.53). The BPS equations for the vector multiplets (4.42)-(4.44) can now be rewritten as master equations (4.54)-(4.56). Irrespective of the possible additional moduli, we can demonstrate that the BPS equations admit the coincident wall solution. Since the hypermultiplet parts are already solved as in (4.47)-(4.48), our main task is to solve the master equations (4.54)-(4.56) associated with the vector multiplets. In order to solve them explicitly, we take the strong gauge coupling limit g_i, ĝ_i → ∞, where the master equations become just algebraic constraints for Ω_{ic}, Ω_{iF} and η_i. In principle, they can be solved algebraically. Furthermore, Eq. (4.34) in the limit g_i → ∞ tells us that positivity is maintained only if Φ_i vanishes. In the following we will, therefore, consider a special point in the solution space where Φ_i = 0, i = 1, 2 (4.58), which implies from Eq. (4.51) that the Ω_{iF} are constant matrices. Then the differential equations (4.55)-(4.56) reduce to a set of algebraic equations. Notice that for both sectors i = 1, 2 these equations are the same and do not couple to each other. We can, therefore, focus our discussion on only one sector, since all results are equivalent in both of them. So in the remaining discussion we will drop the index i from all fields. Now we consider the moduli matrix for the coincident walls corresponding to the most symmetric point of the moduli space. Note that in this solution we restore a modulus y_0 corresponding to the position of the coincident walls. A similar construction of the domain wall solution works for the second sector (i = 2), besides the first sector (i = 1) given above. Let us note that a field-dependent gauge coupling function similar to (3.33) is automatically obtained as a bosonic part of the Lagrangian specified by the cubic prepotential in Eq. (4.11).
Restoring the index i = 1, 2 for both of the domain wall sectors, and using (4.65) with (4.50), we finally conclude that the appropriate profile of the field-dependent gauge coupling function, Σ^0_1/m_1 − Σ^0_2/m_2, similar to (3.33), is achieved. When we make (a part of) the global flavor symmetry a local gauge symmetry, we have several options. Since the first flavor group SU(N_1) is in general different from the second flavor group SU(N_2), we can naturally introduce two different gauge fields for i = 1 and 2. This option leads to two decoupled sectors in the low-energy effective Lagrangian, which can only be coupled by higher derivative terms induced by massive modes. Another interesting option is to introduce a gauge field only for the diagonal subgroup of isomorphic subgroups of the two different flavor groups, such as SU(Ñ) ⊂ SU(N_1), SU(Ñ) ⊂ SU(N_2) with Ñ ≤ N_1, N_2. This option is interesting in the sense that massless gauge field exchange will communicate between the two domain wall sectors. We hope to come back to these issues in the near future. Let us make a few comments. First, we have shown that the chiral model analyzed in section 3 can be extended to a supersymmetric gauge theory with eight supercharges, and that the field-dependent gauge coupling function, which is the key to localization, is naturally explained by taking a cubic prepotential. Second, there may be more moduli not contained in (H^0_L, H^0_R), which require further study. Third, here we have presented a solution at the special point Φ = 0. It would be interesting to consider the case Φ ≠ 0, but then we need to work at finite gauge coupling, which we will investigate in future work. §5. Conclusions and discussion In this paper we have successfully localized both massless non-Abelian gauge fields and massless matter fields in a non-trivial representation of the gauge group. We first considered a (4+1)-dimensional U(N) gauge theory with an additional SU(N)_L × SU(N)_R × U(1)_A flavor symmetry. We introduced the flavor gauge field for the diagonal flavor group SU(N)_{L+R}, which is unbroken in the coincident wall background. The flavor gauge fields are localized on the wall by introducing the scalar-field-dependent gauge coupling function. We then studied the low-energy effective Lagrangian and showed that the massless localized matter fields interact minimally with the localized SU(N)_{L+R} gauge field in the adjoint representation. Moreover, the full nonlinear interaction between the moduli, containing terms up to second order in derivatives, was worked out. The field-dependent gauge coupling function is naturally realized in supersymmetric gauge theories using the so-called prepotential. For this reason, we also explored the bosonic part of an N = 1 supersymmetric extension of our model. The main result of this paper is the effective Lagrangian (3.49). The moduli field U appearing in the effective theory is a chiral N × N matrix field like a pion, since it is a NG boson of the spontaneously broken chiral symmetry. The other moduli in (3.49), collected in the N × N Hermitian matrix x̂, have the physical meaning of the positions of the N domain walls as diagonal elements. We argued that the fluctuations of the moduli field x̂ can develop VEVs corresponding to the splitting of walls, and that the Higgs mechanism occurs as a result. Namely, the flavor gauge fields acquire masses by eating the non-Abelian cloud.
Therefore, in this model, the Higgs mechanism has a geometrical origin, like in low-energy effective theories on D-branes in superstring theory. Amongst the possible future investigations, we would like to study the non-coincident solution to further clarify this geometrical Higgs mechanism. We have noticed that our effective moduli fields resemble the pion in QCD. Similar attempts have been quite successful using D-branes. 25) We believe that our methods can provide more insight into various aspects of low-energy hadron physics. We plan to explore this direction more fully in subsequent studies. In the discussion of the supersymmetric extension of our model in section 4, we employed a general setup where both sectors possessed their own domain wall solution, preserving the same half of the supercharges. But another approach is also possible. We can consider a model where different halves of the supercharges are preserved in each sector (BPS and anti-BPS walls), and SUSY is completely broken in the system as a whole. It has been proposed that the coexistence of BPS and anti-BPS walls gives supersymmetry breaking in a controlled manner. 26) In our present case, BPS and anti-BPS sectors interact only weakly. If we choose a flavor gauge field for each sector separately, we have only higher derivative interactions induced by massive modes. If we choose the diagonal subgroup of (subgroups of) each sector as the flavor gauge group, we have the more interesting possibility of the massless gauge field as a messenger between the two sectors. We plan to address this issue elsewhere. In order to construct a realistic brane-world scenario with the SM fields on the domain wall, we need the localization of fields in the fundamental representation of the gauge group. This is still an open problem and one of the priorities of our future investigations. In particular, the SM contains chiral fermions. Localization of chiral fermions is a particularly challenging problem. The anomaly associated with chiral fermions is also an interesting issue to be addressed. We would also like to clarify these problems in subsequent studies. Two more issues remain to be addressed. The first is the question of the sign of the gauge kinetic term. In our present model, the positivity of the gauge coupling function is assured only when the positions of the walls are properly ordered (see Eq. (3.48)), namely only in a region of the moduli space. More economical models, such as that given in Ref. 12), may not have such moduli, and therefore the effective gauge coupling may always be positive. Lastly, as discussed in section 4, we have not succeeded in exhausting all moduli in the supersymmetric extension of our model. We would also like to investigate these aspects in the future. Acknowledgements This work is supported in part by the Japan Society for the Promotion of Science. Appendix A Domain wall solution of the gauged massive CP¹ sigma model Here we consider the domain wall solutions in the gauged massive CP¹ sigma model. The model is obtained as the strong gauge coupling limit of a model similar to the one we studied in section 2.2. Namely, we start with a Lagrangian which has U(1) × U(1) gauge symmetry with two flavors, where H = (H_L, H_R). The covariant derivative is defined with respect to both gauge fields, and the mass matrix is chosen as M = diag(m, −m) as before. We next take the strong gauge coupling limit g → ∞ of only one of the gauge couplings, which results in a non-linear sigma model coupled to the other gauge field with finite gauge coupling e. In this limit the gauge field w_M and the neutral scalar field σ become Lagrange multipliers.
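The eliminated fields take the standard strong-coupling form; the following is a sketch of these Lagrange-multiplier solutions as they usually appear in this class of models (signs and factors depend on charge conventions, so treat the normalization as our assumption).

```latex
w_M \;=\; \frac{i}{2}\,\frac{\partial_M H\, H^\dagger \;-\; H\, \partial_M H^\dagger}{H H^\dagger}\,,
\qquad
\sigma \;=\; \frac{H\, M\, H^\dagger}{H H^\dagger}\,,
```

which, substituted back, produce the CP¹ sigma model with the projection operator mentioned in the text.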
After solving their equations of motion, we have: where we have introduced the covariant derivative: Plugging these into the original Lagrangian at g → ∞, we get the gauged massive CP¹ sigma model with the projection operator: As before, let us rewrite this Lagrangian in terms of the inhomogeneous coordinate: Then the charge matrix should be chosen as: which leads to the natural expression that the complex scalar field φ has U(1) charge 1 for the gauge field a_M: Plugging these into Eq. (A.6), we finally get the Lagrangian:

Let us next consider a domain wall solution in this model. We assume that all the fields depend only on the extra-dimensional coordinate y. Then the four-dimensional components of the Maxwell equation can be immediately solved by a_µ = 0, µ = 0, 1, 2, 3. (A.14) The fifth component is: Now the Hamiltonian reduces to the following form: (A.16) Thus the reduced Hamiltonian is minimized when the following first order equation is satisfied: The Lagrangian is given by: where we have promoted the moduli parameters y_0, α to the fields y_0(x^µ), α(x^µ) on the wall, and we have introduced the covariant derivative: where α is a function of the (3+1)-dimensional coordinates x^µ. Assuming a_µ to be y-independent (zero mode), we finally obtain: Thus we find that the gauge field a_µ(x) absorbs the scalar field α(x) to become massive via the Higgs mechanism. Since the U(1) gauge field a_µ is massive in the effective Lagrangian, we have to integrate it out according to the spirit of the low-energy effective theory.

Appendix B Effective Lagrangian on the domain wall

In this appendix we derive our main result (3.41), the effective Lagrangian for the gauged chiral model introduced in §3.

B.1. Compact form of the gauged nonlinear model

Starting from the Lagrangian, and using the Einstein summation convention for a = {L, R}, we first eliminate the gauge fields W_µ to obtain a simple expression for the gauged nonlinear sigma model. The gauge fields W_µ are given by their equations of motion as: The effective Lagrangian (B.1) should also contain the kinetic term for the gauge field A_µ, but we will not write it explicitly here, for brevity. Eq. (B.1) can be further simplified by using the following identities: After some algebra we find: Plugging the above expression back into (B.1), we arrive at: (B.13)

In the following we would like to carry out the integration over the extra-dimensional coordinate y. This can be done in two steps. First, we must factorize all quantities depending on y (or on ŷ) into one term inside the trace, effectively reducing our problem to the following form: where M is some matrix independent of y, and f is some function. In the second step we diagonalize x̂: x̂ = P⁻¹ diag(λ_1, …, λ_N) P, and use the fact that f(P⁻¹ ŷ P) = P⁻¹ f(ŷ) P. This transformation leads to: For every term in the sum we can perform the substitution ỹ = my − λ_i. The key observation is that in each term the integration is the same, independent of the particular value of λ_i. Thus we arrive at an identity: It appears as if we had just made the substitution ŷ = ỹ 1_N. This is possible, of course, only thanks to the diagonalization trick and the properties of the trace. In the subsequent subsections, however, we will refer to this procedure simply as a 'substitution', for brevity. Let us decompose the effective Lagrangian (B.13) into three pieces, L_eff = T_x̂ + T_U + T_mixed, (B.16) and apply the outlined procedure to each term.
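The reduction just outlined is easy to verify numerically. The following sketch (Python with numpy/scipy assumed; the profile f and the matrices are arbitrary illustrative choices, and ŷ = my·1_N − x̂ is taken consistently with the substitution ỹ = my − λ_i used above) checks that the y-integral of Tr[M f(ŷ)] collapses, via the diagonalization trick, to Tr[M] ∫ f(ỹ) dỹ / m, independently of x̂, which is the content of the identity (B.15):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
N, m = 3, 2.0
A = rng.standard_normal((N, N))
xhat = (A + A.T) / 2                        # Hermitian moduli matrix (illustrative)
M = rng.standard_normal((N, N))             # arbitrary y-independent matrix
f = lambda t: 1.0 / np.cosh(t)              # any integrable profile

def trace_integrand(y):
    # diagonalization trick: f(P^-1 yhat P) = P^-1 f(yhat) P
    lam, P = np.linalg.eigh(m * y * np.eye(N) - xhat)
    return float(np.trace(M @ (P * f(lam)) @ P.T))

lhs = quad(trace_integrand, -40, 40, limit=200)[0]
rhs = np.trace(M) * quad(f, -40, 40)[0] / m  # substitution ytilde = m*y - lambda_i
print(lhs, rhs)                              # agree to numerical precision
```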
B.2.1. Kinetic term for U

First, let us concentrate only on the terms containing double derivatives of U, which we denote T_U: where we have used the fact that inside the commutator one can freely interchange e^{−ŷ}/cosh(ŷ) → −e^{ŷ}/cosh(ŷ), since the difference is just a constant matrix. In this way we made T_U manifestly invariant under the exchange ŷ → −ŷ. Since in the first factor of T_U all ŷ-dependent quantities are on the right side, we can, according to our previous discussion, make use of the identity (B.15) and carry out the integration: For the second term, however, we first use the identity: Now all ŷ-dependent factors stand on the right and we can formally exchange ŷ → ỹ. The summation can be carried out to give: The formula for T_U now reads: Since we started with T_U invariant under the transformation ŷ → −ŷ, we should take only the even part of the above formula (under the exchange L_x̂ → −L_x̂) as the final result: Now we can carry out the integration using the primitive function

∫ dy / (cosh(y − α) cosh(y)) = (1/sinh(α)) ln [ 1 / (1 − tanh(α) tanh(y)) ].

Therefore we obtain the result to all orders in x̂ as: from which we can easily read off the coefficients of the terms beyond the leading one. For example, the first three terms read:

With the use of the identity (B.17), one can prove the following: We can use this result to factorize all ŷ-dependent quantities to the right and make the substitution ŷ = ỹ 1_N: Now we are free to perform the summation and integration to obtain: from which we can easily read off the coefficients of the terms beyond the leading order in the series expansion.

The kinetic term for x̂ is given by: We are going to need the identity: With the aid of this we arrive at: where we again employed the diagonalization trick and the identity (B.15). Let us carry out the summation and the integration to obtain: leading to the power series: Putting all the pieces together as L_eff = T_x̂ + T_U + T_mixed, we obtain our final result (3.41).
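As a quick numerical sanity check on the primitive function used in T_U above (a sketch assuming scipy; the value of α and the integration limits are arbitrary), one can confirm that (1/sinh α) ln[1/(1 − tanh α tanh y)] indeed integrates 1/(cosh(y − α) cosh y):

```python
import numpy as np
from scipy.integrate import quad

alpha, a, b = 0.7, -2.0, 3.0
F = lambda y: np.log(1.0 / (1.0 - np.tanh(alpha) * np.tanh(y))) / np.sinh(alpha)
integrand = lambda y: 1.0 / (np.cosh(y - alpha) * np.cosh(y))

print(quad(integrand, a, b)[0], F(b) - F(a))   # both print the same value
```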
12,469.8
2012-08-30T00:00:00.000
[ "Physics" ]
Electrostatic Interaction of Phytochromobilin Synthase and Ferredoxin for Biosynthesis of Phytochrome Chromophore*

In plants, phytochromobilin synthase (HY2) synthesizes the open-chain tetrapyrrole chromophore for the light-sensing phytochromes. It catalyzes the double bond reduction of a heme-derived tetrapyrrole intermediate, biliverdin IXα (BV), at the A-ring diene system. HY2 is a member of the ferredoxin-dependent bilin reductases (FDBRs), which require ferredoxins (Fds) as the electron donors for double bond reductions. In this study, we investigated the interaction mechanism of FDBRs and Fds by using HY2 and Fd from Arabidopsis thaliana as model proteins. We found that one of the six Arabidopsis Fds, AtFd2, was the preferred electron donor for HY2. HY2 and AtFd2 formed a heterodimeric complex that was stabilized by chemical cross-linking. Surface-charged residues on HY2 and AtFd2 were important in the protein-protein interaction as well as in the BV reduction activity of HY2. These surface residues are close to the iron-sulfur center of Fd and the HY2 active site, implying that the interaction promotes direct electron transfer from the Fd to HY2-bound BV. In addition, the C12 propionate group of BV is important for HY2-catalyzed BV reduction. A possible role for this functional group is to mediate the electron transfer by interacting directly with AtFd2. Together, our biochemical data suggest a docking mechanism for HY2:BV and AtFd2.

Biochemical and structural studies have revealed several universal properties of FDBRs. First, transient radical intermediates are present in the catalytic reactions of FDBRs. Two organic radical intermediates have been detected in the two double bond reduction steps of PcyA (5). Also, we have recently reported that an organic radical species is involved in the HY2-catalyzed A-ring 2,3,3¹,3²-diene reduction (6). A radical mechanism for this HY2 reaction has been proposed (Fig. 1). These data strongly support the existence of a universal radical mechanism for FDBR-catalyzed BV reduction. Second, structures of several FDBRs and mutagenesis data indicate that FDBRs share a similar α-β-α sandwich fold, but their double bond specificities are determined by the location of different proton-donating residues and waters in their active sites (7)(8)(9). Third, all FDBRs require the donation of electrons from reduced ferredoxins (Fds) to reduce double bonds (10-12). The plant-type [2Fe-2S] Fds have been shown to be the major electron donors. However, there is only limited information on how FDBRs interact with Fds for electron transfer. In photosynthetic organisms, Fds function not only in the electron transfer system of photosynthesis but also in the redox reactions of several oxidoreductases, such as sulfite reductase, nitrite reductase, glutamate synthase, and ferredoxin:thioredoxin reductase. Studies of these Fd-dependent enzymes indicate that they interact with their Fd partners mainly through electrostatic interactions (13)(14)(15)(16)(17). In Arabidopsis, the PΦB synthase HY2 has been shown to reside in the plastid, where it meets its substrate BV and the electron donor Fd (4). Six Fd isoforms have been identified in Arabidopsis. Four of them are plastid-type [2Fe-2S] Fds, and the remaining two are Fd-like proteins. Two of the four plastid-type Fds, AtFd1 and AtFd2, were predicted to be typical leaf-type Fds (18).
Although these Fds share high similarity, their functions could be diverse because of different expression patterns, abundance, subcellular localizations, redox potentials, and specificities to redox enzymes. It has been proposed that PcyA and PebS interact with ferredoxins through the basic patches on the surface near the substrate binding sites (7)(8)(9). However, there is no experimental evidence to support this hypothesis. The HY2 homology model also shows similar characteristics to the PcyA and PebS structures (supplemental Fig. S1) (6). Therefore, we approached the study of the interaction of HY2 and Fd through a combination of enzyme assay, site-directed mutagenesis, and chemical cross-linking. Our results suggest that HY2 utilizes AtFd2 as the main electron donor. HY2 and Fd interact in a 1:1 ratio, mainly through charged residues on each side. Similar heterodimerization is also present in other FDBRs, such as PcyA, PebA, PebB, and PebS. We also found that the C12 propionate group of BV is important for HY2-catalyzed BV reduction. These findings then allowed us to propose a docking mechanism for HY2 and AtFd2.

Protein Expression, Purification, and Site-directed Mutagenesis-HY2 and site-directed mutant proteins were expressed and purified as described previously (6). For expressing PcyA from Synechocystis sp. PCC6803 (PcyA_SYNY3), PebA and PebB from Synechococcus sp. WH8020 (PebA_SYNPY and PebB_SYNPY), and PebS from cyanophage myovirus P-SSM2, the coding regions of these bilin reductases were subcloned into pTYB12 expression vectors (New England Biolabs). The proteins were expressed and purified with a method similar to that for HY2. For expressing recombinant ferredoxins from Arabidopsis thaliana, the six AtFd coding regions without the predicted N-terminal transit peptide (or targeting signal) were subcloned into the Escherichia coli (E. coli) expression vector pET42b (Novagen) to construct pET42b-mAtFds. The stop codons of each mAtFd coding sequence were retained to produce tag-free proteins. Cultures of E. coli strain BL21 containing pET42b-mAtFds were generated to express tag-free AtFds. The bacterial cells were grown at 37°C in 500-ml batches of NZCYM medium (5 g/liter NaCl, 5 g/liter yeast extract, 1 g/liter casamino acids, and 2 g/liter MgSO₄·7H₂O, pH 7.5) containing Fe(NH₄) citrate (12 mg/liter) and ampicillin (100 µg/ml) to an A₆₀₀ of 0.6-1.0. Cultures were induced by the addition of 0.1 mM isopropyl thio-β-galactoside, incubated overnight at 16°C, and subsequently harvested by centrifugation. The tag-free AtFds were purified first with a HiTrap DEAE FF ion-exchange column according to instructions supplied by the manufacturer (GE Healthcare), and then further purified with a Superdex 75 size exclusion column pre-equilibrated with TK buffer (25 mM TES-KOH, pH 8.5, and 100 mM KCl). All site-directed mutants were generated in pET42b-mAtFd2 using the QuikChange site-directed mutagenesis kit (Stratagene). Mutant mAtFd2 proteins were expressed and purified with a method similar to that for the wild type. Recombinant Synechococcus sp. PCC7002 Fd (SynFd) was purified as described previously (20). Purified Fds were dialyzed against TK buffer (25 mM TES-KOH, pH 8.5, and 100 mM KCl). The protein concentration of the Fd solutions was determined by absorption at 420 nm (using a molar extinction coefficient of 9.7 mM⁻¹·cm⁻¹) and by BCA methods. All purified proteins were flash-frozen in liquid nitrogen and stored at −80°C prior to use.
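The concentration determination from the 420 nm absorbance is a direct application of the Beer-Lambert law with the quoted extinction coefficient. A minimal sketch (the 1 cm path length and the absorbance value are illustrative assumptions, not data from this study):

```python
# Beer-Lambert law: A = epsilon * c * l, so c = A / (epsilon * l)
epsilon_420 = 9.7   # mM^-1 cm^-1, molar extinction coefficient of oxidized Fd at 420 nm
path_cm = 1.0       # assumed standard 1 cm cuvette
A420 = 0.35         # hypothetical absorbance reading

conc_uM = 1000.0 * A420 / (epsilon_420 * path_cm)
print(f"Fd concentration: {conc_uM:.1f} uM")   # ~36.1 uM
```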
Enzyme Assay and Spectroscopic Analysis-Steady-state bilin reductase assays were performed similarly to those described previously, with minor modifications (3). Steady-state assay conditions consisted of 25 mM TES-KOH, pH 8.5, 100 mM KCl, 0.025 units/ml FNR, 1 µM (determined by spectroscopy) wild-type or mutant Fd, 100 µM bovine serum albumin, 5 µM BV, 0.1 µM wild-type or mutant HY2, 25 units/ml glucose oxidase, 100 mM glucose, and 25 units/ml catalase. Excess NADPH (i.e., 30 µM) was added to initiate catalysis. Reaction mixtures were incubated at 30°C for 30 min (determined to be within the linear range for mHY2 activity). Crude bilins were extracted with a C18 Sep-Pak Light cartridge and subsequently evaporated to dryness using a SpeedVac concentrator. HPLC analysis was performed as described previously (5). Ultraviolet-visible absorption spectroscopic measurements were performed as described previously (6). For measuring the UV-visible spectra of the BV complexes of wild-type HY2 and site-directed mutants, equimolar solutions (10 µM) of BV and enzyme were mixed and incubated at 25°C for 5 min prior to the measurement. To compare the efficiency of BV binding between wild-type HY2 and site-directed mutants, the long-to-short wavelength absorption ratios (A₆₅₀/A₃₈₀) of the enzyme-BV complexes were measured. Ratios for free and wild-type HY2-bound BV were set as 0 and 100%, respectively.

Chemical Cross-linking and Protein Electrophoresis-To label the Fd with EDC, 200 µM (determined by the BCA method) Fd proteins in 50 mM phosphate buffer at pH 6.5 were incubated with 3.2 mM EDC and 6.4 mM Sulfo-NHS for 2 min at room temperature. An equal volume of 20 µM mHY2:BV complexes in 50 mM phosphate buffer at pH 8.0 was immediately mixed with the EDC-labeled Fd and incubated for 10 min at room temperature. The cross-linking reaction was stopped either by adding SDS-PAGE sample buffer or 50 mM Tris-HCl, pH 8.0. Protein samples with sample buffer were boiled at 95°C for 10 min and analyzed by SDS-PAGE on 4 to 12% NuPAGE gels (Invitrogen). Gels were stained with Coomassie Blue to visualize the protein bands. The intensities of the protein bands were quantified using the program ImageJ (NIH).

Structure Simulation and Modeling-The homology model of the HY2:BV complex was generated as described previously (6). The structure of maize Fd extracted from the Fd:FNR complex was used as a template to generate the homology model of Arabidopsis Fd2 (AtFd2) (16). Residues 53-148 of AtFd2 were used for modeling in the Discovery Studio 2.0 program package (Accelrys). The resulting model for AtFd2 was manually edited in Coot to lessen steric clashes and then subjected to energy minimization in Discovery Studio 2.0 (21). To generate the docking model for HY2 and mAtFd2, the ZDOCK program in the Discovery Studio 2.0 program package was used. A total of 54,000 docked poses were generated. The top 100 poses were categorized into 16 clusters, and the top pose in cluster 3 (ranked 18 out of 54,000 docked poses), which fitted our mutagenesis data, was selected for further analysis.

RESULTS

AtFd2 Is the Major Reductant for HY2-We first checked which Arabidopsis Fd is the preferred electron donor for HY2. In all, there are six Fd genes in A. thaliana. All Fd proteins were predicted to have a transit peptide or targeting signal in their N-terminal regions. Therefore, the mature proteins of all six Fds were expressed, purified, and tested for their ability to function as the electron donor of HY2.
All purified AtFds showed the typical absorption of plant-type Fds in the oxidized form (data not shown). Absorption at 420 nm from the iron-sulfur clusters was used to quantify the concentration of Fd for the bilin reductase assay. Saturating amounts of Fds and FNR were included in the reaction. The result showed that AtFd2 transfers electrons for the HY2 reaction more efficiently than the other AtFds (Table 1). Although both AtFd1 and AtFd2 have been shown to be leaf-type and plastid-localized Fds with almost 90% sequence identity and similar redox potentials, AtFd1 was still less effective at supporting HY2 catalysis. Based on the expression levels of the different AtFds previously reported by Hanke et al. (18), we believe that AtFd2 is the major electron donor for HY2 in Arabidopsis.

Homology Models of AtFd2 and HY2 Reveal the Possible Interaction Mechanism-For successful transfer of electrons to HY2-bound BV, AtFd2 needs to dock on the surface of the protein. Structures of both PcyA and PebS show a basic area composed of conserved, positively charged residues on the surface of the BV-binding pocket (supplemental Fig. S1) (7)(8)(9). A similar characteristic can also be observed in the HY2 homology model we previously generated (Fig. 2, top panel) (6). This area is a potential docking site for the acidic Fd protein. We also generated a homology model of AtFd2 according to a multiple sequence alignment of Fds and the structure of mature maize Fd, which is 77% identical to AtFd2 in primary sequence (supplemental Fig. S2B) (16). Interestingly, as shown in the bottom panel of Fig. 2, a surface area on AtFd2 contributed by conserved acidic residues fits nicely with the basic area of HY2. The corresponding residues of the acidic area in Fds from other plant species have also been predicted to be important for charge interaction with FNR (13,16). The electrostatic interaction could potentially bring the iron-sulfur cluster and BV close enough together for electron transfer. Based on these observations, we then mapped the interaction mechanism between AtFd2 and HY2 in more detail.

Dimerization of Fds and FDBRs Indicates the Interaction Involves Salt Bridges-The interaction of Fd and HY2 may involve intermolecular salt bridges in the binding interface, which could be formed between adjacent carboxyl and amino residues. We therefore tested this hypothesis by the chemical cross-linking technique using EDC, a zero-length cross-linker that can generate an amide bond from two adjacent carboxyl and amine groups. A similar method has been successfully used in generating the heterodimer of Fd and FNR (22,23). It has also been used for identifying the interactions of ferredoxins with nitrite reductase and glutamate synthase in Chlamydomonas (24,25). To enhance the cross-linking efficiency, a stabilizing compound for the EDC reaction, Sulfo-NHS, was included in this study. In a time course experiment, a protein band with a molecular mass of about 45 kDa was observed after cross-linking AtFd2 and HY2 for 5 min (Fig. 3, lanes 2 and 8). The size of the protein complex is in good agreement with the calculated molecular mass of an AtFd2:HY2 heterodimer (43,485 Da). The native molecular mass of the AtFd2:HY2 complex was also determined by size-exclusion chromatography (supplemental Fig. S3). A relative molecular mass of 40.9 kDa was deduced, supporting our size prediction from SDS-PAGE. The heterodimerization was further confirmed by mass spectrometry-based protein identification (data not shown).
Furthermore, cross-linking of AtFd2 and HY2 produced only heterodimers over the time course. The addition of BV resulted in more condensed protein bands without generating more heterodimer complexes, suggesting that the bound BV increases the specificity of Fd docking but not the affinity (Fig. 3, lanes 8-13). These results strongly suggest that the interaction between HY2 and Fd involves at least salt bridges between carboxyl and amino residues on each side. We then tested the heterodimerization of HY2 with the six AtFds. As shown in Fig. 4A, all AtFds can interact with HY2 in a 1:1 ratio. The interaction of Fd and HY2 is specific, because replacing Fd with Arabidopsis heme oxygenase 1 (HO1) did not generate any cross-linked product. More heterodimer formation was observed in the reactions of AtFd1 and AtFd2, and less in those of AtFd3, AtFd4, AtFdl-1, and AtFdl-2. The ability of each AtFd to interact with HY2 correlated with its activity in the HY2 reaction (Table 1), indicating that the affinity of AtFds for HY2 is the determinant of BV reduction activity. The leaf-type Arabidopsis Fds, especially AtFd2, are the preferred redox partners for HY2.

A recent study by Okada (26) reported that cyanobacterial PcyA and Fd formed a complex in a 1:2 ratio. A long space-arm, amine-reactive cross-linker, bis(sulfosuccinimidyl) suberate (BS³), was used in his experiments. Because our use of the zero space-arm cross-linker EDC in the Fd-HY2 reaction gave a result clearly different from his study, we therefore performed the EDC cross-linking experiment on cyanobacterial FDBRs and Fd and found that all bilin reductases, including cyanophage PebS, formed heterodimers with Fds (Fig. 4B). The affinity of PebA_SYNPY for SynFd was not only higher than that of PcyA_SYNY3 but also higher than that of PebB from the same species (Fig. 4B, lanes 7-9). More interestingly, the cyanophage PebS also interacted strongly with SynFd (Fig. 4B, lane 10). An unknown cross-linked complex was observed in the PebS reaction. These data strongly support the hypothesis that FDBRs interact with Fds in a 1:1 ratio and also suggest that the formation of salt bridges is universal for the interactions between FDBRs and their partner Fds.

Conserved Charged Residues on the Surface of HY2 and AtFd2 Are Involved in Protein-Protein Interaction-We further investigated details of the interaction mechanism using HY2 and AtFd2 as model proteins. The electrostatic interaction of HY2 and AtFd2 requires charged residues on each side. HY2 residues in the predicted Fd docking site were first substituted by neutral residues. All the mutant proteins were expressed, purified, and measured for steady-state bilin reductase activity and the ability to bind BV (Table 2). All mutants still retained the ability to bind BV; however, most of the mutants showed decreased activity relative to the wild-type protein. Mutation of His-259 had no effect on activity, suggesting this positively charged residue is not involved in Fd interaction. Most of the selected residues are conserved in the HY2 family except Glu-110 (supplemental Fig. S2B). Interestingly, mutation of this residue increased the HY2 activity 3-fold. So far, this is the first point mutation identified in an FDBR that improves enzymatic activity. Several conserved residues in the acidic surface area or close to the iron-sulfur cluster of AtFd2 were also selected for site-directed mutagenesis.
The ability of the AtFd2 mutant proteins to function as the electron donor for HY2 was analyzed by the bilin reductase assay. Among the residues selected, mutations of the Glu-81, Arg-92, and Asp-112 residues decreased AtFd2 activity (Table 2). The corresponding residues in maize Fd have all been shown to form intermolecular salt bridges with FNR (16). This result implies that the docking mechanism of AtFd2 to the HY2 protein may be similar to that of Fd-FNR binding. The dimerization of AtFd2 and HY2 can also be analyzed by the EDC cross-linking method. We therefore compared the dimerization ability of the point mutants of HY2 and AtFd2. Fig. 5A shows the amount of heterodimer produced from wild-type AtFd2 and the HY2 mutant proteins. Mutants with decreased reductase activity all generated fewer heterodimers after cross-linking (Table 2). The correlation of enzymatic activity and heterodimer formation reveals that these residues are critical in the protein-protein interaction. A double mutation of the Lys-263 and Arg-264 residues on HY2 disrupted heterodimer formation more than any single mutation. This result suggests that the electrostatic interactions of the two proteins are contributed by the combination of several positively charged residues on HY2. The protein-protein interaction was also partially eliminated in those AtFd2 single mutants showing decreased activity (Fig. 5B and Table 2). Interestingly, the R92Q mutant retained partial ability (75% of wild type) to interact with HY2 but almost completely lost its activity. We propose that there may be alternative functions for this positively charged residue on AtFd2, such as mediating electron transfer from the iron-sulfur center to the HY2-bound BV molecule.

TABLE 2. Relative activities of HY2 and AtFd2 wild type and site-directed mutants. Steady-state bilin reductase assays were performed as described under "Experimental Procedures." Integrated peak areas of the 3Z/3E-PΦB reaction products from HPLC profiles were determined as a percentage of wild-type HY2 and AtFd2 activity. To compare the efficiency of BV binding between wild type and site-directed mutants, the long-to-short wavelength absorption ratios of the enzyme-BV complexes were measured. Ratios for free and wild-type HY2-bound BV were set as 0 and 100%, respectively.

The C12 Propionate Group of BV Is Important for HY2-catalyzed BV Reduction-The homology model and mutagenesis data suggest that electrostatic interactions bring the iron-sulfur cluster of AtFd2 and the HY2-bound BV molecule into close proximity for electron transfer. It is possible that electrons are directly accepted by the BV molecule from Fd without traveling through other pathways. From the known structures of FDBRs and our HY2 model, one possible entry site for electrons is the solvent-exposed propionate groups of the protein-bound BV (supplemental Fig. S1) (7,9). To test this hypothesis, we used two BV analogs, BV C8 monoamide (BV-8amide) and BV C12 monoamide (BV-12amide), as substrates for the HY2 activity assay (Fig. 6). In these analogs, the ionizable carboxyl group on either the C8 or the C12 propionate side chain of BV was converted into a neutral amide group. Both BV analogs retained a similar ability to bind HY2 (data not shown). We then replaced BV with BV-8amide and BV-12amide, respectively, as substrates of HY2 in the bilin reductase activity assay. HY2 catalyzed the reduction of BV-8amide normally compared with the BV reaction (Fig. 6, top and middle panels).
However, BV-12amide was a poor substrate for HY2, with less than 35% being reduced (Fig. 6, bottom panel). The unreacted BV and its analogs in the reaction mixtures were nonspecifically reduced by the reduced Fds and therefore could not be found in the HPLC profiles. The result indicates that the C12 propionate group of BV is functionally important in the double bond reduction step catalyzed by HY2. Ionization of the carboxyl group on the C12 propionate side chain may play an important role in the electron transfer reaction.

DISCUSSION

FDBRs are a family of enzymes that catalyze the reduction of BV with diverse double bond specificities. For reducing double bonds, FDBRs require electrons to be transferred from Fd proteins. This study examined the interaction mechanism of FDBRs and Fds by using HY2 and Fd from A. thaliana as model proteins. We found that AtFd2 is the preferred electron donor for HY2. HY2 and AtFd2 formed a heterodimer after chemical cross-linking. Similar phenomena were also identified for cyanobacterial and cyanophage FDBRs. Several charged residues on HY2 and AtFd2 were important in both heterodimerization and BV reduction activity. These results suggest that electrostatic interactions between FDBRs and their redox partners are required for enzymatic activity. Interestingly, these residues are all located on the surface area close to the Fd redox center or the HY2 active site, suggesting that the interaction may promote direct electron transfer from Fd to the HY2-bound BV molecule. We also found that the C12 propionate group of BV is important for HY2-catalyzed BV reduction. A possible role for this functional group is to mediate the electron transfer by interacting directly with AtFd2. A combination of all our biochemical data suggested a docking model for HY2:BV and AtFd2. The mechanistic implications of this suggested protein-protein interaction are discussed below.

Mechanistic Prediction of the AtFd2 and HY2 Interaction-We generated the docking structure of the HY2:BV-AtFd2 complex using the homology models of HY2 and AtFd2 for simulation (see "Experimental Procedures"). The important surface-charged residues we found on each protein were used to examine the docking models. Fig. 7A shows the best-fit docking pose we have obtained so far. In the model, AtFd2 contacts HY2 on the surface of the BV-binding pocket. As shown in Fig. 7B, the interaction is stabilized by several salt bridges between the two molecules. Most of the important surface-charged residues we identified in this study are components of these intermolecular salt bridges, such as the residue pairs Arg-263/Glu-81, Arg-200/Asp-112, and Lys-255/Asp-73 from HY2 and AtFd2, respectively. Several residues on HY2 were shown to be important for enzymatic activity and interaction with AtFd2 but were not found to be directly involved in the electrostatic interaction for the docking. It is possible that these residues function after the conformations of HY2 and AtFd2 change upon binding, further enhancing the affinity of the two proteins. Such a conformational change may also induce the breaking of intramolecular bonds and the formation of intermolecular bonds. One possible candidate is the Arg-92 residue on AtFd2. The residue corresponding to Arg-92 in spinach Fd forms an intramolecular salt bridge with a conserved glutamate residue (Glu-81 on AtFd2) (27). In the structure of the maize Fd:FNR complex, both Fd residues form intermolecular salt bridges with FNR upon binding (16).
In our docking model, the Glu-81 residue on Fd is paired with Arg-263 of HY2 upon interaction (Fig. 7B). The side chain of Arg-92 then points into the active site and potentially interacts with BV. Together with our activity data from the BV analogs, we hypothesize that Arg-92 interacts with the carboxyl group on the C12 propionate side chain during Fd docking. This interaction positions the Fd molecule precisely and facilitates the electron transfer from the iron-sulfur center to the HY2-bound BV molecule. Alternatively, the salt bridge could neutralize the charges and allow the electron to be transferred directly through the bonding of Arg-92 and the C12 propionate group. Hydrophobic interactions and hydrogen bonding also play important roles in protein-protein interactions. Although we mainly focused on charged residues in this study, an interesting feature we observed in the docking model is the location of nonpolar and hydrogen-bonding residues in the binding interface (supplemental Fig. S4). These residues, from both AtFd2 and HY2, are distributed at the edge of the contact area, forming intermolecular hydrophobic interactions and hydrogen bonds. This characteristic could produce a hydrophobic environment in the central region of the docking interface to stabilize the electron transfer reaction. A similar property has also been identified in the contact region of the Fd:FNR complex (16). Further experimental testing is required to support this hypothesis.

In summary, we propose that Fd binds to HY2 primarily through electrostatic interactions. The binding affinity is further enhanced by hydrophobic interactions and hydrogen bonding in the contact interface. Conformational changes of both proteins upon binding induce the formation of intermolecular bonds, which further anchor the Fd. The docked AtFd2 molecule is precisely oriented by the salt bridge between Arg-92 on AtFd2 and the C12 propionate side chain of HY2-bound BV. The electron transfer process mediated by the salt bridge is stabilized by the hydrophobic environment in the central area of the docking interface. The electron is then transferred to the BV backbone, delocalized over the conjugated double bond system, eventually stopping at the double bond reduction site with adjacent proton donors. Fd docking must be transient to allow the next round of electron transfer from another Fd molecule. The Fd:HY2 complex eventually dissociates due to the lower affinity induced by conformational changes of both proteins. Conformational changes could result from the protonation of BV and the oxidation of the iron-sulfur center on Fd (5,28). This proposed Fd-HY2 interaction not only can accelerate the electron transfer but also generates a more hydrophobic environment to stabilize the transient radical intermediate produced during BV reduction. This hypothesis would be better tested by further mutagenesis and protein-protein interaction experiments.

Reductant Specificity of FDBRs-FDBRs depend on Fds as the reductant for converting BV into bilin pigments (12). As FDBRs have evolved diverse activities, Fds may also be specialized for individual enzymes. Four Fd genes can be found in the genomes of Synechocystis and Synechococcus. The encoded Fd proteins share high sequence identity (more than 50%) and possibly have similar structural folds. However, we found that the Fd from Synechococcus sp. PCC7002 has lower affinity to PcyA from Synechocystis sp. PCC6803 than to the Synechococcus FDBRs, indicating that Fd recognition by FDBRs is partially species-specific (Fig. 4B).
FIGURE 6. HPLC analysis of HY2-catalyzed reduction of BV and its analogs. Steady-state bilin reductase assays were performed as described under "Experimental Procedures." 5 µM BV, BV-8amide, and BV-12amide were used as the substrates of HY2. Bilin pigments were extracted after the reactions were stopped and analyzed by reverse-phase HPLC. Elution positions of BV, 3Z-PΦB, 3E-PΦB, 3Z-PΦB-8amide, 3E-PΦB-8amide, 3Z-PΦB-12amide, and 3E-PΦB-12amide are indicated by arrows.

Furthermore, even though they are from the same species, PebA and PebB also have different affinities for their Fd partner from the respective species. This result reveals that even Fds within a single species have become optimized for specific Fd-dependent enzymes during evolution. On the other hand, it is also likely that FDBRs are specialized for recognizing conserved Fds. PebS was recently identified in the genome of the cyanophage myovirus P-SSM2, which infects Prochlorococcus as the host (2). PebS, together with its redox partner PetF, another plant-type [2Fe-2S] Fd encoded by the P-SSM2 genome, are expressed in the early induction stage during cyanophage infection. It has been proposed that expression of the single enzyme PebS replaces the two separate enzymes PebA and PebB in host cells to improve phage fitness (2). This could be beneficial to viral reproduction. One interesting phenomenon we observed is the high affinity of PebS for SynFd (Fig. 4B, lane 10), although SynFd is not the physiological partner for PebS. We propose that early induction of both pebS and petF during infection is a strategy for the virus to ensure efficient control of host systems in the early stage. However, under high selection pressure, cyanophage PebS has evolved into a protein with high affinity and low specificity for any available redox partner in its hosts. It may thus take advantage of host Fds as electron donors for producing PEB in the late stage of viral infection, eventually altering host systems for later actions, such as increasing the efficiency of viral replication. It will be interesting to compare the binding affinities of phage-encoded and host Fds to PebS.

A recent study proposed that PcyA and Fd from Thermosynechococcus elongatus BP-1 interact in a 1:2 ratio (26). The conclusion was based on the molecular weight of the cross-linked PcyA:Fd complex calculated from SDS-PAGE. However, it has been shown that acidic Fds migrate more slowly in SDS-PAGE due to a reduced amount of bound SDS (29), which may lead to miscalculation of the molecular weight of the complex. Also, the long space-arm cross-linker used in that study may induce complex formation from some nonspecific interaction. Our data from the EDC cross-linking used here suggest the formation of heterodimers of Fds and FDBRs. The dimerization is further supported by the data from size-exclusion chromatography (supplemental Fig. S3). We believe that interaction in a 1:1 ratio of Fd to FDBR is the physiological condition for FDBR catalysis. We cannot rule out the possibility that there might be alternative mechanisms in different organisms. As we also found in the PebS cross-linking reaction, a high molecular mass complex (∼63 kDa) was produced (Fig. 4B, lane 10). Further studies on the interaction mechanism of PebS and its redox partner, as well as of FDBRs from different organisms, are necessary.
Functions of Fds in Arabidopsis-Although other protein reductants, such as flavodoxins, can partially provide reducing power, plant-type [2Fe-2S] Fds are known to be the preferred reductants for FDBRs (3). Many photosynthetic organisms contain more than one copy of Fd genes encoding [2Fe-2S] Fds. There are a total of six Fd isoforms in A. thaliana. Four of them, namely AtFd1, AtFd2, AtFd3, and AtFd4, are predicted to be localized in the plastid. The remaining two Fds, AtFdl-1 and AtFdl-2, are uncharacterized ferredoxin-like proteins. The physical properties and functions of the four plastid-type AtFds have been well characterized (18). AtFd1 and AtFd2 are photosynthetic Fds mainly expressed in the leaves, whereas AtFd3 is a non-photosynthetic Fd localized in the roots. AtFd4 is evolutionarily distant from the other plastid-type AtFds but shares low similarity with root-type Fds. The differential expression of the AtFds raises an interesting question: how does HY2 receive electrons under non-photosynthetic conditions, for example in root tissue? The HY2 gene is expressed in almost all plant tissues and at every developmental stage (Genevestigator). However, HY2 is a very low abundance enzyme in plants (11). It could be that the amount of root-type Fds, like AtFd3, is enough to provide the reducing power for HY2 in roots. Our biochemical data suggest that the photosynthetic AtFds donate electrons for HY2 catalysis more efficiently than the non-photosynthetic Fds. Interestingly, AtFd1 and AtFd2 have high sequence identity (87% in the mature region) and similar redox potentials (−425 and −433 mV, respectively). They interact with FNR with similar affinity (18). In our case, AtFd1 and AtFd2 also show similar binding affinity to HY2, but different activity in electron donation (Fig. 4A and Table 1). We propose that AtFd2 is the physiological electron donor for HY2 because it shows higher activity and comprises around 90% of the leaf Fds in Arabidopsis under standard growth conditions. We cannot rule out the possibility that AtFd1 may be able to replace AtFd2 function to some degree, although less than 10% of leaf Fds are AtFd1. It is still not clear why AtFd1 is less effective at donating electrons under our in vitro assay conditions, given that it is almost identical to AtFd2. The few variations in the primary sequences of the two proteins may cause the difference. Further investigation is being carried out to answer this question. Previous data from RNA interference-mediated reduction of AtFd2 have implied that AtFd2 functions primarily in linear electron flow in photosynthesis (30). Although the concentration of Fd in chloroplasts is relatively high compared with that of other Fd-dependent enzymes, the majority of AtFd2 reduced by photosystem I possibly interacts with the predominantly membrane-bound FNR for electron transfer (31)(32)(33). We also found that AtFd2 has much higher binding affinity to FNR than to HY2 (data not shown). As photosynthetic electron flow is tightly controlled, the amount of reduced AtFd2 partitioned to other redox enzymes in chloroplasts could be limited. It is possible that Fd-dependent enzymes like HY2 interact with stromal Fds reduced from the pool of oxidized Fds by soluble, NADPH-bound FNR (33). This could be important for PΦB production in the dark, when photosynthesis is abolished. Alternatively, HY2 as well as other redox enzymes could be compartmentalized to the surface of the thylakoid membrane to more efficiently accept electrons from reduced Fds.
Identification of the suborganelle localization of HY2 is underway.

Future Studies-Although we have proposed a docking model for HY2 and AtFd2, it would be better illustrated by a crystal structure of the HY2:BV-AtFd2 complex. The crystallography study is in progress. More mutagenesis data are also required to support our predicted docking mechanism. We believe that by combining mutagenesis with other protein-protein interaction techniques, such as isothermal titration calorimetry, the details of the mechanism of electron transfer from Fd to FDBRs can be better understood. Differential localization of HY2 in suborganelle compartments will be another important research topic in understanding the regulation of tetrapyrrole biosynthesis. Such knowledge offers the possibility of modulating HY2 activity as well as phytochrome functions in plants.
7,646.8
2009-12-08T00:00:00.000
[ "Chemistry" ]
Weak Convex Domination in Hypercubes

The n-cube Q_n is the graph whose vertex set is the set of all n-dimensional Boolean vectors, two vertices being joined if and only if they differ in exactly one coordinate. The n-star graph S_n is a simple graph whose vertex set is the set of all n! permutations of {1, 2, …, n}, and two vertices α and β are adjacent if and only if α(1) ≠ β(1) and α(i) ≠ β(i) for exactly one i, i ≠ 1. In this paper we determine the weak convex domination number for hypercubes. The convex, weak convex, m-convex and l1-convex numbers of star and hypercube graphs are also determined.

Introduction

Graphs considered here are connected and simple. Akers and Krishnamurthy introduced the n-star graph S_n [1]. The vertex set of S_n is the set of all n! permutations of {1, 2, …, n}, and two vertices α and β are adjacent if and only if α(1) ≠ β(1) and α(i) ≠ β(i) for exactly one i, i ≠ 1. The n-star graph is an alternative to the n-cube with superior characteristics. Day and Tripathi have compared the topological properties of the n-star and the n-cube in [5]. Arumugam and Kala have determined some domination parameters of the star graph and obtained bounds for γ, γ_i, γ_t, γ_c and γ_p in the n-cube for n ≥ 7 in [2].

Let G be a simple connected graph. A subset S of V is called a convex set if for any u, v in S, S contains all the vertices of every u-v geodesic in G. A subset S of V is called a weak convex set if for any u, v in S, S contains all the vertices of a u-v geodesic in G. A subset S of V is called an m-convex set if for any u, v in S, S contains all the vertices of every u-v induced path in G. A subset S of V is called an l1-convex set if it is convex and has a vertex which is adjacent to the rest of the vertices of S. The maximum cardinality of a proper convex set is the convexity number of G. In a similar way we define the weak convex number, the m-convex number and the l1-convex number; the l1-convex number is the maximum of {Con⟨N[x]⟩ : x ∈ V(G)}. A subset S of V is called a dominating set if every vertex in V − S is adjacent to at least one vertex in S. A dominating set is a weak convex dominating set if it is weak convex. So far, the exact value of the domination number of Q_n for large n has not been determined. Here we determine the weak convex domination number of Q_n for any n.

We know that γ(Q_3) = 2: either {3, 5} or {1, 8} can be chosen, that is, diametrically opposite vertices are chosen. Their distance is therefore three, and hence γ_wc(Q_3) = 4. Let the γ_wc(Q_3) set be {1, 4, …}. For Q_4, the eight vertices can be chosen in any manner from the two layers of Q_3. Hence we observe that, for Q_n, 2^(n−1) vertices are required for a weak convex dominating set, which can be obtained in any manner from the two layers of Q_(n−1). Now we claim that 2^(n−1) is the minimum number of vertices for a weak convex dominating set in Q_n. Let k + l = 2^(n−1), where k, l are the numbers of vertices chosen in the two layers of Q_(n−1) for a weak convex dominating set in Q_n. Without loss of generality assume l < k. Let Q¹_(n−1) and Q²_(n−1) denote the first and second layers (copies of Q_(n−1)). Choose k vertices in Q¹_(n−1) in such a way that they form a weak convex dominating set in Q_(n−1).

Case (i): … Then the single vertex that dominates u and v is either u or v. Therefore weak convexity is violated between u (respectively v) and a vertex among the k vertices which is adjacent to the private neighbors of u (respectively v) in Q¹_(n−1), a contradiction. If the private neighbors of u and v do not form an edge in Q¹_(n−1) and N(u) ∩ N(v) ≠ ∅, then weak convexity is violated in Q²_(n−1), which is a contradiction.
If the private neighbors of u and v do not form an edge in Q¹_(n−1) and N(u) ∩ N(v) = ∅, then either u or v is required for domination in Q²_(n−1). Thus weak convexity is violated between u (respectively v) and a vertex among the k vertices which is adjacent to the private neighbors of u (respectively v) in Q¹_(n−1), a contradiction. Therefore k − (l − 1) < 2. Hence at least one vertex must be included in one of the layers of Q_(n−1) for a weak convex dominating set in Q_n.

Case (ii): None of the l − 1 vertices is a private neighbor of the k vertices. Clearly weak convexity is violated between some vertex of Q¹_(n−1) and some vertex of Q²_(n−1).

Case (iii): Some of the l − 1 vertices are private neighbors of the k vertices. By Case (i) we get the result.

Interchanging k and l, we get the result for k < l.

Conclusion

In this paper we determined the weak convex domination number for hypercube graphs. We also determined the convex, weak convex, m-convex and l1-convex numbers of star and hypercube graphs. Other domination parameters for hypercubes are under study in our group.
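The γ_wc(Q_3) = 4 claim above is easy to probe by brute force for small n. The sketch below (plain Python; vertices of Q_n encoded as n-bit integers is an implementation choice) checks domination and weak convexity directly, confirming that the 2^(3−1) = 4 vertices of a Q_2 face form a weak convex dominating set of Q_3 while no 3-vertex set does:

```python
from itertools import combinations

def neighbors(v, n):
    return [v ^ (1 << i) for i in range(n)]

def is_dominating(S, n):
    Sset = set(S)
    return all(v in Sset or any(w in Sset for w in neighbors(v, n))
               for v in range(1 << n))

def geodesic_within(S, u, v, n):
    """True if some u-v geodesic lies entirely inside S.

    Each BFS step flips one bit where the current vertex still differs
    from v, so every explored path is a shortest (geodesic) path."""
    Sset, frontier = set(S), {u}
    while frontier:
        if v in frontier:
            return True
        frontier = {x ^ (1 << i)
                    for x in frontier for i in range(n)
                    if (x ^ v) >> i & 1 and (x ^ (1 << i)) in Sset}
    return False

def is_wc_dominating(S, n):
    return is_dominating(S, n) and all(
        geodesic_within(S, u, v, n) for u, v in combinations(S, 2))

# A Q2 face of Q3 (vertices 000, 001, 010, 011): 2^(3-1) = 4 vertices.
print(is_wc_dominating([0b000, 0b001, 0b010, 0b011], 3))        # True
# No weak convex dominating set of Q3 has only 3 vertices:
print(any(is_wc_dominating(list(S), 3)
          for S in combinations(range(8), 3)))                  # False
```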
1,313.2
2021-05-15T00:00:00.000
[ "Mathematics" ]
Demonstration of shift, scale, and rotation invariant target recognition using the hybrid opto-electronic correlator

Previously, we had proposed a hybrid opto-electronic correlator (HOC), which can achieve the same functionality as that of a holographic optical correlator but without using any holographic medium. Here, we demonstrate experimentally that the HOC is capable of detecting objects in a scale, rotation, and shift invariant manner. First, the polar Mellin transformed (PMT) versions of two images are produced, using a combination of optical and electronic signal processing. The PMT images are then used as the reference and the query inputs for the HOC. The observed correlation signal is used to infer, with high accuracy, the relative scale and angular orientation of the original images. We also discuss practical constraints in reaching a high-speed implementation of such a system. In addition, we describe how these challenges may be overcome for producing an automated version of such a correlator. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Introduction

Target recognition and tracking has a wide range of applications in the modern world. Optical image recognition systems offer a fast alternative to traditional electronics-based systems. The simplest such optical system is the Vander Lugt correlator [1][2][3], which is able to compare two images using holographic filters. However, a key limitation of this technology is the use of a slow recording process for the filters. Other correlators have been designed to circumvent the recording process, such as the Joint Transform Correlator (JTC) [4][5][6][7][8][9], which uses dynamic materials to record and correlate at the same time. However, the materials needed for such a correlator suffer from many practical problems, such as the need to apply a high voltage and a susceptibility to damage [10,11]. We recently proposed and demonstrated a new hybrid opto-electronic correlator (HOC) [12,13] that overcomes some of these limitations and replaces the JTC's nonlinear material with detectors. The advantage of such a correlator is discussed in more detail in [12]. Yet two key limitations inherent to optical target recognition remain in our originally proposed HOC architecture: the system is intolerant to changes in scale and rotation. There have been many proposals to overcome these limitations, many of which detail the implementation of coordinate transforms [14][15][16][17][18]. We recently proposed that the incorporation of the polar Mellin Transform (PMT) into the existing HOC architecture would result in a shift, scale, and rotation invariant correlator [19]. In this paper, we show the results of such an incorporation using commercially available instruments. In addition, we show that the output of a positive match can be analyzed to determine the rotation angle of the query image.

Today, computers are able to detect matched images with great accuracy thanks to advances in neural networks and image recognition algorithms. However, even state-of-the-art systems take upwards of 26 ms to detect matched features [20]. This time quickly adds up when scanning large databases or processing real-time camera feeds. Our system, as proposed using specialized circuits for the electronic components, is capable of reaching correlation times on the order of a few microseconds [12].
The HOC is not meant to replace computers, as they are capable of detecting much finer details and performing more complex algorithms. Instead, it is expected to work as a pre-processor that would filter out obvious matches and mismatches, and produce a vastly reduced set of images that may require further processing. Of course, in principle, this pre-processing could also be performed using electronic circuits, entirely removing the need for optical components. However, the current best 2D Fourier Transform (FT) electronic integrated circuits have execution times of over 6 ms per image [21], highlighting the need for optical techniques. To exemplify the usefulness of the HOC, consider a database with 1 million images, 100 of which are potential matches to a query. A computer using state-of-the-art algorithms would take 0.026 × 10⁶ = 26,000 seconds = 7.2 hours to compare each database image to the query image by using neural networks. If instead one uses electronic FT's for correlation pre-processing (requiring at least two FT's per correlation), it would take 0.006 × 2 × 10⁶ = 12,000 seconds = 3.3 hours to filter out the 100 potential matches, which then require a subsequent 100 × 0.026 = 2.6 seconds to process with neural networks for more detailed results. Assuming a correlation time of 5 μs, the HOC requires 5 × 10⁻⁶ × 10⁶ = 5 seconds to perform the filtering, and then 2.6 seconds for the neural network processing. It is this kind of large-database image processing that would benefit most from the HOC. While electronic components are generally cheaper and more robust, the difference in performance between an all-electronic and the hybrid opto-electronic approach is large enough to outweigh the disadvantages.
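These back-of-the-envelope figures can be reproduced in a few lines (a sketch; the 26 ms, 6 ms, and 5 µs per-operation times are the ones quoted above):

```python
n_images, n_candidates = 1_000_000, 100
t_nn, t_ft, t_hoc = 26e-3, 6e-3, 5e-6  # s: neural-net match, electronic 2D FT, HOC correlation

all_nn  = n_images * t_nn                               # 26,000 s (~7.2 h)
ft_pre  = n_images * 2 * t_ft + n_candidates * t_nn     # 12,002.6 s (~3.3 h)
hoc_pre = n_images * t_hoc + n_candidates * t_nn        # 7.6 s
print(all_nn / 3600, ft_pre / 3600, hoc_pre)
```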
The rest of the paper is organized as follows. Section 2 details the experimental setup and theory of operation of the system. An overview of the steps required to implement the PMT in the HOC is given in section 3. The results are presented and examined in section 4, where we show how the use of the PMT conforms to the theory. We conclude with a summary and outlook in section 5.

Experimental setup and working principle of the HOC

The details of the basic HOC architecture can be found in [12] and [13], while the augmentation thereof via incorporation of the PMT can be found in [19]. If commercially available components are used, the operating speed of the HOC is severely limited by the serial communication between the devices. For this reason we proposed a system called the Integrated Graphic Processing Unit (IGPU), which may allow the HOC to perform a correlation in a time scale as short as a few microseconds. Much work remains to be done before the IGPU can be realized. As such, we have shown the working principle of the HOC using existing technology, without optimizing the speed of operation.

Overview of the PMT-augmented HOC

Like other optical correlators, the HOC takes advantage of the FT property of lenses. However, unlike traditional holographic correlators, it does not require a writing step where the information of the FT of the reference image is stored prior to its operation. Instead, the HOC captures the FTs of the reference and query images, at the same time, on two separate arms. A Focal Plane Array (FPA) on each arm captures three intensity signals: the FT of the image, an auxiliary plane wave, and the interference between these two. The amplitude and phase information for the FT of the image is thus captured for each arm. We then subtract the intensity of the FT'd image beam and the auxiliary plane wave from the interference pattern for each arm. This yields two electronic FT-domain signals that are then multiplied together pixel-by-pixel, resulting in a single output signal. By then transferring this signal back to the optical domain using an SLM, we can pass it through another lens and obtain its FT, which corresponds to the space-domain convolution and correlation of the two original images. This is further explained in section 2.3. The amplitudes of the cross-correlation and convolution produced this way depend on the relative phase of the two auxiliary plane waves. Thus, for a practical implementation of this scheme we employ a Phase Stabilizer and Scanner (PSS), which is described in more detail later on.

The process as described above is able to recognize a match between a reference image and a query image in a shift invariant manner. However, it is not rotation and scale invariant. This limitation is eliminated by employing the PMT process, which involves the following additional steps in each arm before the interference with the auxiliary beams occurs. First, the FT of each image is detected with an FPA; then the amplitude of the FT is determined by taking the square root of the signal for each pixel. The resulting numbers are then converted from the rectilinear coordinates {x, y} to the log-polar coordinates {ρ, θ}, and the resulting PMT images are the inputs that are interfered with the auxiliary beam in each arm. More details of this process can be found in [19].

Experimental setup

For this demonstration we have used a simplified version of the architecture proposed in [12]. This is illustrated schematically in Fig. 1. A continuous-wave diode-pumped solid-state laser (Verdi V2) at 532 nm is used as the light source. The laser beam starts with a diameter of 1 mm, which is spatially filtered and expanded to 1" (25.4 mm). This beam is passed through a 50/50 Beam Splitter (BS) into two arms: the Image Arm and the PSS Arm. The latter leads to a mirror mounted on a Piezo-electric Transducer (PZT-1a), which redirects the beam through a shutter (S1) to a Mach-Zehnder Interferometer (MZI). The MZI, along with PZT-2, a pair of photo-detectors (MZI PD) that are separated to detect two different fringes of the MZI interference pattern, and a Proportional-Integral-Differential (PID) controller, forms a phase-stabilization system. This MZI has two BS's inserted in one path. These redirect two plane waves (C₁, C₂) towards the image arms, with C₁ passing through PZT-1b. The phase-stabilization system allows us to lock the phase difference between C₁ and C₂ according to a bias voltage applied to the output of the PID controller. This is discussed in greater detail in section 2.4. The image arm also passes through a shutter (S2) and is then split into the reference and query arms. Each of these two beams reflects off an amplitude-modulated (AM) SLM to produce the image beams H₁ and H₂, each of which is then directed towards a biconvex lens. The lens produces the two-dimensional FT of the image at its focal plane. Each of the FT'd image beams (M₁, M₂) then interferes with the corresponding plane wave prior to being detected by an FPA placed at the focal distance of the biconvex lens. For this setup we used the Thorlabs USB2.0 CMOS camera (DCC1545M), which has a resolution of 1280 × 1024 pixels, to perform the function of the FPA. The use of shutters allows us to choose what we detect.
We can detect just the FT'd image beam, just the plane wave, or the interference of the two, depending on which shutters are open.

The SLM's used for this demonstration are custom-made using Texas Instruments' DLP3000 modules. These work using Digital Micro-mirror Devices (DMDs), which rapidly tilt to reflect light towards and then away from a target, effectively functioning as AM SLMs. The DLP3000 modules have a physical resolution of 684 × 608 pixels, but operate in a wide aspect ratio of 854 × 480. The active area of the SLM is 0.3" (7.62 mm), and each individual micro-mirror measures 7.6 μm across.

Mathematical Model of the HOC

In this version of the HOC, each set of measurements (A_j, B_j, C_j) is taken by opening and closing the shutters as described in the previous section, using the subscript '1' to denote the reference image and the subscript '2' the query. The FT of each image and each plane wave can be expressed as follows: where φ_j is the phase of the FT'd image beam at the FPA plane, and Ψ_j is the phase of the interfering plane wave at the same point. Here, M_j and φ_j are functions of (x, y), but C_j and Ψ_j are assumed to be constant on the FPA plane. The detected interference pattern between the FT of the image (M_j) and the plane wave (C_j) is given by: This digital signal array can be stored on an FPGA along with the signal arrays B_j and C_j². The FPGA can then perform a subtraction to obtain: This signal can be stored for both the reference (S₁) and the query (S₂) image in the same FPGA and later multiplied together using four-quadrant multiplication to find the signal array S: The resulting signal can be sent to an SLM to be transferred into the optical domain using a laser. Here, the signal beam can be FT'd by passing through another biconvex lens, presenting the final output signal S_f at the focal plane: Here ℱ stands for the FT. Because M_j is the FT of an image H_j, we can now use the well-known relationship between the FT of products of functions and convolutions and cross-correlations to express more explicitly the four terms in S_f: where ⊗ indicates two-dimensional convolution and ⋆ indicates two-dimensional cross-correlation. This shows that using three intensity signals (A, B, and C) from each arm we can find the correlation between the two images. In Eqs. (4) to (6) we have grouped together the factors corresponding to the plane waves C₁ and C₂ into constants (α and β). A more explicit expression of these terms reveals the following: It is clear that the output of the HOC depends nontrivially on the phases of the plane waves at their respective FPA's. We are also only interested in the cross-correlation terms of our output signal (T₃ and T₄); as such, it is our goal to maximize β and minimize α while keeping both values stable. For this we have designed and implemented a PSS, which is explained in the next section.

Phase Stabilization and Scanning

The PSS can be considered a specific type of optical phase-locked loop (OPLL) with an added phase scan. Currently there are very few ways to implement a stable OPLL [22][23][24], and integrated circuits that perform this task are still at the research stage. To overcome this problem, we designed a discrete OPLL that can maintain lock for some time, along with a method of quickly re-establishing optimum lock values. The HOC requires us to control the phase difference between our Reference and Query auxiliary plane waves. From Eq. (8) it is clear that β will reach its maximum value when Ψ₁ − Ψ₂ = 2mπ, where m is an integer.
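Before describing how this phase condition is maintained optically, the signal chain of Eqs. (1)-(8) can be emulated digitally as a sanity check. The sketch below (numpy assumed; the unit-amplitude plane waves, the image sizes, and the use of an inverse FFT for the final lens are illustrative choices, not the hardware parameters) forms S_j = A_j − B_j − C_j² for each arm, multiplies the arms pixel-by-pixel, and Fourier transforms the product; with Ψ₁ − Ψ₂ = 0 (β maximal) a shifted matched query yields a correlation peak at its displacement:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
ref = rng.random((N, N)); ref -= ref.mean()   # zero-mean so the correlation peak dominates
qry = np.roll(ref, (5, 9), axis=(0, 1))       # query = reference shifted by (5, 9)

def arm_signal(img, psi, c=1.0):
    """S_j = A_j - B_j - C_j^2 built from the three intensity measurements."""
    M = np.fft.fft2(img)                      # FT'd image beam at the FPA plane
    C = c * np.exp(1j * psi)                  # auxiliary plane wave
    A = np.abs(M + C) ** 2                    # interference pattern
    B = np.abs(M) ** 2                        # image-beam intensity alone
    return A - B - c ** 2                     # = 2 c |M_j| cos(phi_j - psi_j)

psi1 = psi2 = 0.3                             # locked so that psi1 - psi2 = 0
S = arm_signal(ref, psi1) * arm_signal(qry, psi2)  # pixel-by-pixel product
out = np.abs(np.fft.ifft2(S))                 # final lens FT, up to coordinate inversion

print(np.unravel_index(np.argmax(out), out.shape))  # (5, 9) or its mirror (N-5, N-9)
```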
Phase Stabilization and Scanning

The PSS can be considered to be a specific type of optical phase-locked loop (OPLL) with an added phase scan. Currently there are very few ways to implement a stable OPLL [22][23][24], and integrated circuits that perform this task are still at the research stage. To overcome this problem, we designed a discrete OPLL that can maintain lock for some time, along with a method of quickly reestablishing optimum lock values. The HOC requires us to control the phase difference between our Reference and Query auxiliary plane waves. From Eq. (8) it is clear that $\beta$ will reach its maximum value when $\Psi_1 - \Psi_2 = 2m\pi$, where $m$ is an integer. In order to achieve such a value, the HOC architecture incorporates an MZI with an adjustable mirror (PZT-2) and two coupled detectors (MZI PD), as shown in Fig. 2, which is a subset of the complete apparatus shown in Fig. 1. These detectors are separated by a short distance on the plane normal to the direction of propagation of the laser, which allows them to detect different fringes of the interference pattern generated in the MZI. An electronic circuit finds the difference in intensity between these detectors and converts it into a voltage that is fed into a low-noise pre-amp and then a PID controller. The output of the PID is then added to a bias voltage that allows us to control the locking point before being connected to PZT-2. This system operates under the assumption that the mirrors and the optical path lengths are very stable. For this reason, the optical table is floated and the experiment is enclosed so as to minimize air turbulence. The first plane wave ($C_1$) is extracted from the MZI prior to the PZT, having travelled a distance $L_{c1}$ from the first BS to FPA-1a, given by:

$$L_{c1} = l_{1,2} + l_{2,3} + l_{3,4} + l_{4,5} + l_{5,6} + l_{6,7}, \qquad (9)$$

where $l_{m,n}$ indicates the path from element $m$ to element $n$. The second plane wave ($C_2$) is extracted after PZT-2. The total path length for this plane wave from the first BS to FPA-2 is $L_{c2}$, given by a similar sum over its own path segments (10). Without considering the effects of the optical components (BS's and mirrors), which produce constant phase shifts, the phase of each plane wave can be written as:

$$\Psi_j = \frac{2\pi}{\lambda}\, L_{cj}. \qquad (11)$$

Using this expression we can now find the phase difference to be:

$$\Delta\Psi = \Psi_1 - \Psi_2 = \frac{2\pi}{\lambda}\,(L_{c1} - L_{c2}) + \Delta\phi_{OE}, \qquad (12)$$

where $\Delta\phi_{OE}$ represents the constant difference in phase shift produced by the optical elements in each path. We can also find the sum of the phases, expressed in terms of the static path lengths $l'_{n,m}$, where $l'_{n,m}$ represents the path length when the relevant PZT is at its static point. Keeping this sum constant requires $\Delta l_{PZT\text{-}1b} = -\Delta l_{PZT\text{-}2}$; mechanically, this means that PZT-1b has to be programmed to move the exact same distance as PZT-2, but in the opposite direction. This can be achieved using a feed-forward system where an inverted version of the bias signal applied to PZT-2 is sent to PZT-1b. The PID system that controls PZT-2 receives its feedback from MZI_PD. The phase difference between the two path lengths in the MZI can be written as a function of the PZT displacement $\Delta l_{pzt}$, which means that locking the PID to a specific phase at MZI_PD ($\Delta\Psi_{MZI}$) fixes $\Delta l_{pzt}$, which in turn locks $\Delta\Psi$. We can vary this value by use of a bias voltage that is added to the output of the PID controller [25]. As was previously shown, PZT-1a allows us to adjust the values of $\Psi_1$ and $\Psi_2$ simultaneously without changing $\Delta\Psi$. By continuously running a ramp signal at some frequency $\omega_s$ on this PZT, we can scan over a wide range of phases. By applying a Low Pass Filter to the detected output, the terms that oscillate under this common-phase scan (those containing $\alpha$) can then be washed out. This is the ideal way to operate the HOC. However, because the phase scan operates in the time domain, this method requires that all six signals ($A_j$, $B_j$ and $C_j$) be detected simultaneously with six FPA's, and without shutters, which greatly increases the complexity of the system. As such, we did not implement the scanning segment of the PSS for the demonstration reported here. It should be noted that it is still possible to see the results of a correlation without washing out the $\alpha$ term, but one must be careful to distinguish between the correlation and convolution terms.
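As a rough illustration of how such a lock behaves, the toy loop below drives a PZT against a sinusoidal fringe-difference error signal with a proportional-integral law. The gains, the drift model, and the linear displacement-to-phase mapping are all assumptions for the sketch, not measured parameters of our controller.

# A toy discrete-time simulation of an MZI phase lock, assuming the PZT
# displacement maps linearly to phase (2*pi/lambda per unit path change);
# gains and drift values are illustrative.
import numpy as np

lam = 532e-9                      # laser wavelength (m)
k = 2 * np.pi / lam               # phase per metre of path difference
target = 0.0                      # desired locked phase at MZI_PD (rad)
kp, ki = 0.4, 0.05                # PI gains (illustrative)

phase, integ = 1.0, 0.0           # initial phase error and integrator state
for step in range(200):
    drift = 1e-9 * np.sin(0.05 * step)        # slow path-length drift (m)
    err = np.sin(phase - target)              # fringe-difference error signal
    integ += err
    pzt_move = -(kp * err + ki * integ) / k   # metres commanded to PZT-2
    phase += k * (pzt_move + drift)
print(f"residual phase error: {phase - target:.2e} rad")

The residual stays well below a fringe, mimicking how the PID holds the locking point while the bias voltage selects it.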
One way to reach the maximum value of $\beta$ for an unknown $\alpha$ is to run a series of known matched images through the HOC at varying bias voltages. This works as follows. One image is set as both the Reference and Query inputs. The HOC then runs a correlation for a particular bias voltage, which yields a match at the output of the HOC. The bias voltage is then changed within the range of operation of the PZT, repeating the correlation. The result will again be a match, but the overall output intensity will have either increased or decreased. The bias voltage is changed so as to look for the maximum intensity. This process is repeated, changing the bias in progressively smaller steps, until the maximum output intensity is found.

Polar Mellin transform in the HOC

Due to the properties of the FT and lenses, the detection of a FT'd optical signal is shift invariant. However, changes to the scale and rotation of the images will alter the scale and rotation of the FT, thus preventing the HOC from achieving a match. To counteract this we can instead compare images that have been pre-processed via the Polar Mellin Transform (PMT). Because the PMT is, by definition, in log-polar coordinates, two identical images with different rotations will present the same PMT with a shift in the $\theta$ coordinate corresponding to the relative rotation angle between them. Similarly, any change in scale will manifest as a shift in the log-radial coordinate $\rho$. By performing the PMT we are essentially converting any rotation and scale changes into translational shifts. Given that the established HOC architecture is inherently shift invariant and that the PMT is very closely related to the FT, the PMT is thus well suited for adding rotation and scale invariance to the HOC architecture, as explained in detail in [19]. The steps to obtain the PMT in an optoelectronic system are as follows (a software sketch is given below): 1) find the FT of the image; 2) determine the amplitude of the FT (2a: determine the intensity of the FT; 2b: take the square root of the intensity for each pixel); 3) perform circular DC blocking; 4) map polar coordinates into a rectilinear plane where x and y correspond to the r and $\theta$ axes; 5) transform the radial coordinate to the logarithm of the ratio of the radial coordinate and a reference length. Steps 1 and 2a can be performed using a laser, an SLM, a FT lens, and an FPA. In this setup we used a single arm of our existing HOC architecture with the PSS shutter (S1) closed. Steps 2b-5 are then performed by a computer. The resulting PMT image is then used as an input to the HOC. By using a PMT image as a reference and converting a query image into its PMT, the HOC is able to find the correlation of the two original images in a shift, scale, and rotation invariant manner. Given that all real digital images are composed of positive integer values, their FT will always contain a high value at the center (DC). The transformation from $\{x, y\}$ to $\{\rho, \theta\}$ of such an image will produce an output that has a non-zero value at the origin ($r = 0$), and it is impossible to transform this point to the log-polar domain. To avoid this, we cut a small hole in the intensity profile of the FT at DC prior to performing the polar coordinate transformation. This is called circular DC blocking [19]. It is important that the hole be small enough not to erase important information from the non-DC area of the FT. However, making the hole very small requires high pixel density. A convenient compromise is to use a small hole of a constant size for all images. If a constant-size circular DC block is chosen, the PMT conversion process can be achieved without any complex computations.
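The software portion of these steps can be sketched as follows; the hole radius, output grid size, and nearest-neighbour sampling are illustrative choices rather than the values used in the experiment.

# A sketch of the PMT pre-processing steps 1-5 described above, assuming a
# square grayscale image; hole size and output grid are illustrative choices.
import numpy as np

def pmt(image, hole=3, n_rho=128, n_theta=128):
    F = np.fft.fftshift(np.fft.fft2(image))        # step 1: FT of the image
    amp = np.sqrt(np.abs(F) ** 2)                  # steps 2a-2b: amplitude
    cy, cx = np.array(amp.shape) // 2
    r_max = min(cy, cx) - 1
    # steps 3-5: circular DC block, polar mapping, log-radial coordinate
    rho = np.exp(np.linspace(np.log(hole), np.log(r_max), n_rho))
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta)                 # rows: theta, cols: log-r
    ys = np.clip((cy + R * np.sin(T)).astype(int), 0, amp.shape[0] - 1)
    xs = np.clip((cx + R * np.cos(T)).astype(int), 0, amp.shape[1] - 1)
    return amp[ys, xs]

img = np.zeros((256, 256)); img[100:160, 80:200] = 1.0   # toy input
print(pmt(img).shape)                                     # (128, 128)

Because the radii are sampled starting at the hole radius, the DC block of step 3 comes for free from the log-spaced grid.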
The final three steps of the PMT process are independent of the detected image and can be achieved by physically connecting an $\{x, y\}$ coordinate input to a rectilinear-mapped $\{\rho, \theta\}$ coordinate output (neglecting the connections corresponding to the circular DC block hole). In this way a single Application-Specific Integrated Circuit (ASIC) could perform the PMT with the help of a FT lens. If an FPA and an SLM are built into this ASIC, the HOC would be able to achieve shift, scale, and rotation invariance using regular non-PMT images by inserting the ASIC at each image arm, as shown in Fig. 3. Ideally we would expect the external SLM to be connected to either a camera or a computer to provide the non-PMT images. It would also be beneficial to incorporate such a system only at the query arm, as shown in Fig. 3, with the reference arm using a holographic memory disk instead of an SLM to store a large database of PMT reference images. For this experiment, a grayscale image of an F-22 Raptor fighter jet was chosen for its excellent contrast, unique shape, and real-world value. Prior to running the experiment, the HOC was calibrated to its optimum bias voltage using the method described in section 2.4 of this document.

Experimental results

The original reference image is shown in Fig. 4(a). The query image shown in Fig. 4(b) has been shifted, scaled by a factor of 0.5, and rotated by 48.25° counterclockwise with respect to the reference. The detected FT's of these two images are shown in Figs. 4(c) and 4(d), respectively. Because the query image is scaled, its FT is larger than that of the reference while also presenting a rotation. Because of these two factors, the HOC was unable to detect a match, producing an almost flat output signal $|S_f|^2$ in Fig. 4(e). The experiment was then repeated using the PMT versions of the reference and query images, shown in Fig. 5. In this case the peak of the output signal in Fig. 5(e) is ~15 times larger than that of Fig. 4(e), indicating a successful correlation. In Fig. 5(b) we have added a red horizontal line that marks the value of $\theta$ that corresponds to $\theta = 0$ in Fig. 5(a). This line shows the translational shift of the PMT caused by the rotation of the original query image: the section of the PMT that corresponds to the top of Fig. 5(a) has looped around to lie under this red line. To complement these results, a simulation using the same input images was run. This is shown in Fig. 6, corresponding to the ideal reference PMT, the ideal query PMT, their ideal FT's, and the simulated HOC output $|S_f|^2$. In Fig. 6(b) we have added a red line similar to the one in Fig. 5(b), this time corresponding to $\theta = 0$ in Fig. 6(a). By measuring the distance in pixels between the bottom of the PMT and the red line, recalling that the full vertical axis represents 360°, we can estimate the rotation of the query image to be ≈48°, which is close to the real rotation of 48.25°. Similarly, the distance between the central peak of the output signal and the two lateral peaks in Figs. 5(e) and 6(e) has been marked with a red line. This is located at $\theta = 2.3$ rad, which is equivalent to a rotation of 48.22°.

Conclusions and outlook

We have demonstrated that an HOC built using commercially available components and incorporating the PMT is able to find a match in a shift, scale, and rotation invariant manner, yielding an output that is ~15 times larger when a match is found vs. when it is not found (without the PMT). Furthermore, the relative rotation of the query image with respect to the reference image in a match can be found in the output signal by measuring the distance from the central peak to one of the two lateral peaks.
We have also shown that the behavior of the PMT-augmented HOC aligns with the theory by presenting simulated results that correspond to our experiment. The development of the PMT-HOC can be categorized in three stages. In stage 1, we have demonstrated the functionality of the system by manually using a computer to perform the electronic processing. In stage 2, the PMT's of the images and the required mathematical processes can be performed by an FPGA, thus fully automating the system. In stage 3, all of the signal processing can be done using specially designed integrated circuits incorporated into the FPA's and SLM's, forming an IGPU. This stage would allow for high-speed automation of the system, performing correlations on a time scale as short as a few microseconds.
A Frequency-Domain Data-driven Adaptive Iterative Learning Control Approach: With Application to Wafer Stage

Feedforward control is becoming increasingly important in ultra-precision stages. However, conventional model-based methods cannot achieve the expected performance in new-generation stages, since it is hard to obtain an accurate plant model due to the complicated stage dynamics. To tackle this problem, this paper develops a model-free, data-driven adaptive iterative learning approach that is designed in the frequency domain. Explicitly, the proposed method utilizes frequency-response data to learn and update the output of the feedforward controller online, with the benefit that neither the structure nor the parameters of the plant model are required. An unbiased estimation method for the frequency response of the closed-loop system is proposed and proved through theoretical analysis. Comparative experiments on a linear motor confirm the effectiveness and superiority of the proposed method, and show that it is able to avoid the performance deterioration caused by model mismatch as the number of iterative trials increases.

The motion performance of precision motion stages has a direct influence on the product quality [2], [3], which leads to higher performance requirements for the stages, such as higher tracking accuracy and larger velocity and acceleration. In the control system of precision motion stages, feedback control is usually used to suppress unstructured external disturbance and system uncertainty, while feedforward control is used to compensate for orderly disturbances, such as thrust ripple [4]. Usually, the control bandwidth is expected to be as high as possible, but it is limited by the mechanical resonance, the time delay of the drive system, the measurement bandwidth, and so on. As a result, feedforward control is necessary for high-performance motion stages, as it can lower the requirements on the feedback control loop [5]. In short, improvement in feedforward control is a significant step toward meeting higher performance in the next-generation industrial precision motion stages [6]-[8]. As a popular feedforward method, the iterative learning control (ILC) approach is widely applied in various scenarios [9]. ILC can improve the tracking performance of a servo system by learning from the data gathered over past trials [10], which is particularly effective in practical cases with repetitive trajectories and external disturbances. ILC methods can be divided into model-free ILC and model-based ILC according to whether a plant model is required. Model-based ILC methods make use of model information, so they can achieve good tracking performance and high convergence speed. The most well-known model-based ILC methods applied to precision motion control, such as wafer stage control, are mainly represented by the inversion-based ILC methods [11]-[13], Q-ILC [14], projection-based ILC [15], and so on. However, the model used in the design process needs to be chosen properly, as it plays an important role in the convergence properties. Moreover, there exists flexible dynamical behavior, such as small damping or low resonant frequencies, in the next-generation precision motion stages [16], so model-based ILC can hardly achieve the expected performance. In contrast to model-based ILC, model-free ILC methods seem to be more applicable for future flexible precision motion stages since they do not require a plant model.
However, the system information is then not fully utilized, so the conventional model-free ILC [17]-[19] needs more iterative trials to tune the parameters for good performance and can hardly achieve the same performance as the model-based methods. Therefore, a tradeoff has to be made between requiring no plant model and fully utilizing the system information to achieve higher tracking and convergence performance. To tackle this problem, frequency-domain based ILC (FD-ILC) has attracted extensive research interest, as it can make full use of the frequency-response information of the system while requiring no parameterized plant model. There are several achievements in FD-ILC. In [20], an FD-ILC method was proposed for an AFM piezoscanner, and experimental results showed that the proposed method can significantly reduce the dynamic coupling errors. However, a small learning gain was chosen in [20] to keep stability in the presence of noise, which results in slower convergence speed. In [21], an inversion-based FD-ILC updating the inverse model by using input-output data was presented for an AFM system, and experimental results confirmed its ability to improve the tracking performance and convergence speed. Although there existed an updating of the learning law in [21], the proposed algorithm could not achieve an unbiased estimation of the inverse model, and its estimation would certainly be affected by the noise. Additionally, there are some ILC methods designed in the frequency domain [22]-[24], but their learning laws were calculated based on the nominal plant model, which would probably limit the tracking performance due to model error. The aforementioned analyses motivate this paper to propose a model-free frequency-domain adaptive ILC method (FD-AILC). The contribution of this paper is threefold. First, a theoretical framework is developed for frequency-domain ILC with an adaptive updating method. During the iterative process, the proposed FD-AILC method not only updates the ILC output, but also updates the ILC learning law. Furthermore, criteria for accelerating convergence are proposed, which enables the possibility of achieving higher performance and increasing the convergence speed simultaneously. Second, an unbiased estimation method to address the influence of measurement noise is proposed. Finally, an application to a linear motor of the wafer stage is presented to compare the proposed method with model-based ILC methods, which demonstrates the effectiveness and superiority of the proposed method. The rest of the paper is organized as follows. The problem statement is formulated in Section II. The frequency-domain data-driven adaptive ILC method and its convergence analysis are presented in Section III. Simulation and experimental results are presented with discussions in Section IV. The conclusions are drawn in Section V.

II. PROBLEM STATEMENT

The schematic diagram of the ILC approach widely applied to precision motion stages is shown in Fig. 1, where $r$ denotes the system reference trajectory, $y_r$ denotes the real system trajectory, $d$ denotes the external disturbance, and $n$ denotes the measurement noise. $C_L$ is the iterative learning controller, $C_{fb}$ is the feedback controller, and $P$ is the plant model. It is noted that the system reference trajectory $r$ is set to be repetitive throughout the whole iterative process, so there is no superscript for $r$. From the basic principle of ILC, the update can be written as:

$$e_{ff}^{k}(t) = e_{ff}^{k-1}(t) + C_L\, e^{k-1}(t), \qquad (1)$$
where $e_{ff}^{k}$ is the ILC controller's output after the k-th iteration, and $e^{k}$ is the system tracking error after the k-th iteration. Based on the former analysis, the conclusion can be drawn that the inversion-based ILC has the ability to improve the convergence speed. Therefore, this paper will focus on analyzing the ILC approach based on the inversion model and further improving it by mapping the design method from the time domain to the frequency domain. When $C_L = \left( \frac{C_{fb} P}{1 + C_{fb} P} \right)^{-1}$, the system can theoretically realize one-step convergence. However, in practice there exists uncertainty in the controlled object and noise in the system, and filters are inevitably involved, so the system is unable to realize one-step convergence and the tracking accuracy is limited. Therefore, it is considered to map the inversion-based ILC method to the frequency domain and design the iterative learning controller based on frequency-response data, namely frequency-domain ILC (FD-ILC). It is defined as

$$C_L(\omega) = \rho(\omega)\, \hat{T}^{-1}(\omega), \qquad (2)$$

where $\hat{T}(\omega)$ is the estimate of the closed-loop system frequency response, obtained from a frequency-response test on the actual system, and $\rho(\omega)$ is the positive real regulator at the frequency point $\omega$, whose range is $\rho(\omega) \in (0, 1]$. Therefore,

$$e_{ff}^{k}(\omega) = e_{ff}^{k-1}(\omega) + \rho(\omega)\, \hat{T}^{-1}(\omega)\, e^{k-1}(\omega), \qquad (3)$$

where $\omega$ is selected according to the frequency points obtained from the FFT of the reference trajectory $r$, and $e^{k-1}(\omega)$ is the FFT of the time-domain error $e^{k-1}(t)$. The convergence condition of the FD-ILC method is detailed in [20]. Compared with the inversion-based ILC method based on a transfer function (TF-ILC), the FD-ILC method has the advantages that no transfer-function model is required, no additional compensation of the system time delay is needed, and an ideal filter can be realized by frequency truncation. The FD-ILC method can thus effectively avoid the problem that an accurate plant model is difficult to obtain. However, using the iterative learning law shown in (2) and (3), the obtained FD-ILC output is not completely accurate, for the following reasons. Firstly, there exist external disturbance and measurement noise when testing the system frequency response. Secondly, it is hard to describe the system's high-order uncertainty. Thirdly, the frequency points obtained in the frequency-response test and the frequency points of the reference trajectory cannot be aligned precisely, so the compensation value at the required reference frequencies has to be estimated through interpolation. That leads to a mismatch between the real value and the estimated value, which introduces error into the FD-ILC output. Therefore, given the drawbacks of the FD-ILC method, a frequency-domain data-driven adaptive ILC method (FD-AILC) is proposed, which can effectively make up for the shortcomings of FD-ILC by updating the learning law using a data-driven adaptive method.
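For illustration, the learning law (3) amounts to a few lines of array arithmetic once the trial error has been transformed with an FFT. The sketch below assumes $\hat{T}(\omega)$ is available on the same FFT grid; the flat placeholder FRF and the test error signal are made up.

# A minimal sketch of the frequency-domain ILC update in Eq. (3), assuming
# the error is sampled over one trial and T_hat is given on the FFT grid.
import numpy as np

def fd_ilc_update(e_ff_prev, e_prev, T_hat, rho=0.7):
    """e_ff^k(w) = e_ff^{k-1}(w) + rho * e^{k-1}(w) / T_hat(w)."""
    E_ff = np.fft.fft(e_ff_prev)
    E = np.fft.fft(e_prev)
    E_ff_new = E_ff + rho * E / T_hat       # frequency-domain learning law
    return np.real(np.fft.ifft(E_ff_new))   # back to the time domain

N = 1024
T_hat = np.ones(N, dtype=complex)           # placeholder closed-loop FRF
e_ff = np.zeros(N)
e = np.sin(2 * np.pi * 5 * np.arange(N) / N)   # made-up trial error
e_ff = fd_ilc_update(e_ff, e, T_hat)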
III. FREQUENCY-DOMAIN DATA-DRIVEN ADAPTIVE ILC

The error of the closed-loop system frequency response is assumed to be multiplicative, and it is defined as

$$\Delta T(\omega) = \frac{T(\omega)}{\hat{T}(\omega)}, \qquad (4)$$

where $T(\omega)$ is the real closed-loop frequency response. According to Fig. 1, there is

$$y_r = T\,(r + e_{ff}^{k} - n^{k}) + G_P\, d^{k}. \qquad (5)$$

Then $e^{k}$ can be deduced as

$$e^{k} = S\, r - T\, e_{ff}^{k} - G_P\, d^{k} - S\, n^{k}, \qquad (6)$$

where $S = 1/(1 + C_{fb} P)$ is the system sensitivity function, $T = C_{fb} P/(1 + C_{fb} P)$, and $G_P$ is defined as $G_P = P/(1 + C_{fb} P)$. Substituting (2) and (3) into (6), the result below can be obtained:

$$e^{k} = S\, r - T\, e_{ff}^{k-1} - \rho\,\Delta T\, e^{k-1} - G_P\, d^{k} - S\, n^{k}. \qquad (7)$$

Meanwhile, according to (6), $e^{k-1}$ can be written as

$$e^{k-1} = S\, r - T\, e_{ff}^{k-1} - G_P\, d^{k-1} - S\, n^{k-1}. \qquad (8)$$

The external disturbance $d^{k}$ can be rewritten as $d^{k} = d_r + d_n^{k}$, where $d_r$ denotes the repetitive disturbance and $d_n^{k}$ denotes the nonrepetitive disturbance of the k-th iterative trial. Combining (7) with (8), there is

$$e^{k}(\omega) = \left(1 - \rho(\omega)\,\Delta T(\omega)\right) e^{k-1}(\omega) + \tilde{n}^{k}(\omega), \qquad (9)$$

where the expression of $\tilde{n}^{k}(\omega)$ is

$$\tilde{n}^{k}(\omega) = G_P \left[ d_n^{k-1}(\omega) - d_n^{k}(\omega) \right] + S \left[ n^{k-1}(\omega) - n^{k}(\omega) \right]. \qquad (10)$$

When $\rho(\omega) = 1$ and there is no system model error, that is, $\Delta T(\omega) = 1$, the system can realize one-step convergence. But there must be errors in the frequency response, and $\Delta T(\omega)$ cannot be equal to 1. If $|\Delta T(\omega)|$ approaches 1, then $\rho(\omega)$ can be selected to be as large as possible, and the convergence rate of the algorithm will be faster. Therefore, $|\Delta T(\omega)|$ needs to be brought as close to 1 as possible in order to improve the convergence rate. Based on the above analysis, the FD-ILC method is improved to update $\hat{T}(\omega)$ in addition to the output of the ILC controller during each iteration. In this way, more accurate frequency-response data can be obtained in the iterative process, which can improve the control performance effectively, and the iterative learning law is modified accordingly (11). If there were no measurement noise $n$ in the system, it could be deduced from (9) that

$$\widehat{\Delta T}^{k-1}(\omega) = \frac{e^{k-1}(\omega) - e^{k}(\omega)}{\rho(\omega)\, e^{k-1}(\omega)}, \qquad (12)$$

so the estimate $\widehat{\Delta T}^{k-1}(\omega)$ of $\Delta T^{k-1}(\omega)$ would be accurate, and the tracking error of the system after the (k+1)-th iteration would be $e^{k+1}(\omega) = 0$. In practice, however, there is measurement noise $n$ in the system, so the estimate in (12) contains noise-dependent terms (13). It can be seen from (13) that the estimated value of $\Delta T^{k-1}(\omega)$ is affected by the measurement noise and is not unbiased. In order to make the estimation of $\Delta T^{k-1}(\omega)$ unbiased, this paper proposes a frequency-domain data-driven adaptive ILC (FD-AILC) with an improved iterative learning law (14), in which $e^{k-1}(\omega)$ is the tracking error obtained from the first run of the system under the ILC output $e_{ff}^{k-1}(\omega)$, and $e^{k-1,2}(\omega)$ is the tracking error obtained from the second run of the system under the same ILC output $e_{ff}^{k-1}(\omega)$. In other words, given $e_{ff}^{k-1}(\omega)$, the system runs twice during the same iteration, and $e^{k-1}(\omega)$ and $e^{k-1,2}(\omega)$ are obtained respectively. In addition, only $e^{k-1}(\omega)$ is used to calculate the next iteration's ILC output $e_{ff}^{k}(\omega)$. Furthermore, under the conditions of Theorem 1, using the iterative learning updating formula (14), the FD-AILC method makes the estimation of $T(\omega)$ unbiased.

Theorem 1: Suppose that 1) $n(\omega)$ and $d_n(\omega)$ are zero-mean; 2) the samples of both $n(\omega)$ and $d_n(\omega)$ are independent of each other; and 3) $n(\omega)$ is independent of $r$ and $d$, and $d_n(\omega)$ is independent of $r$, $n$ and $d_r$. Then the estimation of $T(\omega)$ given by (14) is unbiased.

Proof: First, the error $e^{k}(\omega)$ in the second formula of (14) is substituted by (9), giving (15) and (16). According to (6), (17) holds. Substituting (17) into (16) gives (18). From conditions 2) and 3) of Theorem 1, the expectations of the cross terms vanish, and according to condition 1) of Theorem 1 the expectations of the remaining noise terms are zero. Therefore the expectation of the estimate equals the true value, and Theorem 1 is proved.

Similar to all other ILC methods, according to (9) it is easy to obtain an expression showing the convergence of the method:

$$\left| 1 - \rho(\omega)\,\Delta T(\omega) \right| < 1. \qquad (22)$$

Obviously, this convergence criterion is easy to meet in practice.
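The role of the second run in Theorem 1 can also be checked numerically: the squared magnitude of a single noisy measurement is biased upward by the noise power, whereas the cross-product of two runs with independent noise realizations is unbiased. The following Monte-Carlo sketch uses made-up values at a single frequency point.

# A Monte-Carlo illustration of why two runs per iteration help: the
# cross-product of two independently noisy measurements of the same
# frequency-domain error is an unbiased estimate of |e(w)|^2, whereas the
# single-run squared magnitude is biased upward by the noise power.
import numpy as np

rng = np.random.default_rng(1)
e = 2.0 + 1.0j                 # true error at one frequency point (made up)
sigma, trials = 0.5, 200_000
n1 = sigma * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))
n2 = sigma * (rng.standard_normal(trials) + 1j * rng.standard_normal(trials))

single = np.mean(np.abs(e + n1) ** 2)              # biased: |e|^2 + E|n|^2
double = np.mean((e + n1) * np.conj(e + n2)).real  # unbiased: |e|^2
print(f"true = {abs(e)**2:.3f}, single-run = {single:.3f}, two-run = {double:.3f}")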
Moreover, the updating of the closed-loop frequency-response data cannot be conducted indefinitely; it is therefore necessary to propose a cut-off condition for accelerating convergence. Thus, Theorem 2 is proposed to give a sufficient condition for the algorithm to speed up convergence.

Theorem 2: Suppose the updating formula (14) is adopted, Theorem 1 is satisfied, and the noise $n$ satisfies $|\tilde{n}(\omega)| \le W(\omega)$. Then the sufficient condition for the algorithm to ensure that the convergence rate of the k-th iteration is faster than that of the (k−1)-th iteration is given by (23) and (24), where $A(\omega)$ and $B(\omega)$ are defined accordingly.

Proof: The accelerated convergence can be depicted as (26). According to equation (9), the above expression can be rewritten and, combining the definition of $\Delta T^{k}(\omega)$ in (4) with the first formula of the updating law (14) (the detailed derivation is omitted due to space restrictions), the derived result is (27). $\Delta T^{k-1}(\omega)$ can be obtained through (27). Substituting $\Delta T^{k-1}(\omega)$, $\widehat{\Delta T}^{k-1}(\omega)$ of (14), (27) and (28) into (26), one obtains (29), and bounding its left- and right-hand sides gives (30). It is noted that in (29) and (30), the argument $(\omega)$ of all variables is omitted due to space restrictions. From (30), it can be seen that $M < N$ is a sufficient condition for (29) to be true. As a result, (23) and (24) can be deduced from $M < N$.

The unbiased estimation and the conditions for accelerating convergence of the FD-AILC method have been discussed above. It is noted that Theorem 2 can be used for deciding whether to update $\hat{T}^{k}(\omega)$; that is, the updating of $\hat{T}^{k}(\omega)$ stops when the sufficient condition is not satisfied.

Remark 1: If the external disturbance only consists of the repetitive disturbance, $W(\omega)$ in (23) is the supremum of $|n(\omega)|$.

The detailed algorithm flow of FD-AILC is given below (a code sketch follows the list).
Step 1: The system is run independently twice with the same ILC output $e_{ff}^{i}(t)$, and the two tracking-error records $e^{i,1}(t)$ and $e^{i,2}(t)$ are obtained, respectively.
Step 2: FFT calculations are performed on $e^{i,1}(t)$ and $e^{i,2}(t)$, and their corresponding frequency spectra $e^{i,1}(\omega)$ and $e^{i,2}(\omega)$ are obtained.
Step 3: Decide whether to update $\hat{T}^{i}(\omega)$ according to Theorem 2; if yes, go to Step 4, otherwise set $\hat{T}^{i}(\omega) = \hat{T}^{i-1}(\omega)$ and go to Step 5.
Step 4: According to the updating method (14), the closed-loop system frequency response is updated to obtain $\hat{T}^{i}(\omega)$.
Step 5: The ILC output $e_{ff}^{i+1}(\omega)$ is updated using the learning law (the first formula of (14)).
Step 6: An IFFT is applied to $e_{ff}^{i+1}(\omega)$ to obtain $e_{ff}^{i+1}(t)$.
Step 7: Set $i = i + 1$ and return to Step 1.
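To make the flow concrete, the following toy loop mimics Steps 1-7 on a synthetic closed loop. The plant, the noise level, the cut-off test, and the form of the $\hat{T}$ refresh are simplified stand-ins for Eq. (14) and Theorem 2, chosen only so that the sketch is self-contained.

# A self-contained toy of the FD-AILC flow (Steps 1-7); everything here is
# an illustrative stand-in, not the paper's exact formulas.
import numpy as np

N = 512
rng = np.random.default_rng(2)
bins = np.arange(N)
T_true = 1.0 / (1.0 + 0.002j * bins)                 # toy closed-loop FRF
R = np.fft.fft(np.sin(2 * np.pi * 4 * bins / N))     # reference spectrum

def run_system(E_ff):
    """Toy stage run: E = R - T_true*E_ff plus fresh noise (Step 1)."""
    n = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    return R - T_true * E_ff + n

T_hat = np.ones(N, dtype=complex)   # rough initial FRF estimate
E_ff = np.zeros(N, dtype=complex)
rho = 0.7
for k in range(6):
    E1, E2 = run_system(E_ff), run_system(E_ff)   # Steps 1-2: two runs
    mask = np.abs(E_ff) > 0.1                     # Step 3: cut-off stand-in
    # Step 4: data-driven FRF refresh, averaging the two runs' noise
    T_hat[mask] = (R[mask] - 0.5 * (E1[mask] + E2[mask])) / E_ff[mask]
    E_ff = E_ff + rho * E1 / T_hat                # Steps 5-6: learning law
    print(f"iteration {k}: ||e||/sqrt(N) = {np.linalg.norm(E1)/np.sqrt(N):.4f}")

The printed error norm drops to the noise floor within a few iterations once the toy FRF estimate has been refreshed at the informative frequency bins.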
IV. RESULTS

To illustrate the proposed FD-AILC method and evaluate its validity, a numerical simulation and an experimental test on a wafer stage are implemented in this section.

1) Simulation Setup

The plant model in simulation is designed as a rigid-body mode in parallel with a lightly damped flexible mode, which is one of the most common dynamical models of mechanical systems:

$$P(s) = \frac{1}{m s^{2}} + \frac{K}{s^{2} + 2\xi\omega_{n} s + \omega_{n}^{2}}, \qquad (31)$$

where $P(s)$ is the transfer function of the controlled object. To more visibly observe the effectiveness and superiority of the proposed method, the controlled object is designed with a lower flexible modal frequency and a smaller mass, which depicts the low-weight flexible property of the next-generation precision stage. The parameters of the plant model in simulation are set as $m = 5.0$ kg, $K = 0.09$, $\xi = 0.01$ and $\omega_n = 2\pi \cdot 500.5$, respectively. The sampling period of the system is $T_s = 200$ µs. In addition, the time delay of the system is set as $\tau = 200$ µs, which equals $1 T_s$. The control system is designed according to Fig. 1, where the feedback controller is a PI controller with a lead correction, shown in (32), where $s$ denotes the Laplace operator. The system closed-loop bandwidth is defined as the frequency point at −3 dB of the complementary sensitivity function, so the bandwidth of the control system is 91 Hz. Fig. 2 shows the curves of the reference trajectory and the external disturbance used in simulation. The reference trajectory in simulation is selected as a fourth-order multi-segment polynomial trajectory, with constraints on the first to fourth derivatives. The parameters are the displacement $s = 0.28$ m, the maximum velocity $v = 1$ m/s, the maximum acceleration $a = 40$ m/s², the maximum jerk $J = 3000$ m/s³, and the maximum snap $D = 5 \times 10^{7}$ m/s⁴, respectively. Because in most applications that this paper concerns the external disturbance exhibits a repetitive property during the iterative process, the disturbance used in simulation is designed as a repetitive disturbance force. The disturbance data consist of the cogging force and the cable force, collected on a practical stage driven by the linear motor. The measurement noise is set as Gaussian white noise with an amplitude of 1 nm. Additionally, it is necessary to design initial frequency-response data $\hat{T}_0(\omega)$ for the first iterative trial. In the simulation test, the real value of $T(\omega)$ is known, so $\hat{T}_0(\omega)$ is designed as $\hat{T}_0(\omega) = c_{rand}(\omega) \cdot T(\omega)$, where $c_{rand}(\omega)$ is a random number distributed uniformly in (0.75, 1.25). Apart from the above setup, the positive real regulator $\rho(\omega)$ in the updating formula (14) is selected as 0.7. To further illustrate its superiority compared with other ILC methods, the FD-ILC mentioned in Section II and the ILC method based on the transfer function (TF-ILC) proposed in [11] are also tested in simulation. To provide a fair comparison, the same $\hat{T}_0(\omega)$ is used for $\hat{T}(\omega)$ in (2) for FD-ILC. As for the TF-ILC method, the learning law $C_L^{TF}$ is

$$C_L^{TF}(s) = K_{TF}\, Q_{TF}(s)\, T^{-1}(s), \qquad (33)$$

where $s$ denotes the Laplace operator, $K_{TF}$ is the learning gain, set as 0.7, $Q_{TF}(s)$ is a second-order low-pass filter with a cut-off frequency of 2000 Hz and a damping ratio of 1, and $T(s)$ is set as the transfer function of the real closed-loop system.

2) Simulation Results

Firstly, an evaluation criterion Error-J is defined as the 2-norm of the tracking error, that is,

$$\text{Error-J} = \| e \|_{2}. \qquad (34)$$

Under the same simulation conditions, FD-ILC, TF-ILC, and the proposed FD-AILC are each performed for 10 iterations. Fig. 3 shows the comparative result of the time-domain tracking error, and Fig. 4 shows the comparative result of Error-J. It can be observed from Fig. 3 that after performing the same number of iterations, the tracking errors of the proposed method are smaller than those of FD-ILC and TF-ILC, which proves the effectiveness of FD-AILC and verifies that the proposed method is better than the other two methods at improving the system tracking performance. Fig. 4 shows that after 3 iterations of FD-AILC the system tracking error converges to its minimum value, while both FD-ILC and TF-ILC need about 6 iterations to achieve convergence. The result in Fig. 4 therefore further demonstrates that introducing an adaptive algorithm for updating the iterative learning law facilitates improving the convergence speed.
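For reference, the Error-J curves in Fig. 4 follow directly from (34); a minimal helper, with made-up error samples:

# Error-J of Eq. (34): the 2-norm of the sampled tracking error of one trial.
import numpy as np

def error_j(e):
    """Error-J = ||e||_2 over one trial."""
    return np.linalg.norm(e)

e = np.array([1e-6, -2e-6, 1.5e-6])   # toy error samples (m)
print(f"Error-J = {error_j(e):.3e}")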
In the simulation test, the estimate of the closed-loop frequency response can be examined, because the real value of $T(\omega)$ is known. To further evaluate the proposed method, the estimation result of $T(\omega)$ is shown in Fig. 5, and the updated $\Delta T(\omega)$ is shown in Fig. 6 and Fig. 7; $\Delta T(\omega)$ is defined in (4). From Fig. 5, it can be observed that the curve of $\hat{T}(\omega)$ becomes smoother as the iterations proceed, which illustrates that the FD-AILC method can effectively update $\hat{T}(\omega)$ and thus help improve the convergence speed. As seen from Fig. 6 and Fig. 7, instead of updating the data over the whole frequency band, the algorithm updates only parts of $\Delta T(\omega)$; the updated data are mainly distributed in the frequency bands 0-200 Hz and 480-520 Hz. This result needs to be analyzed in combination with the closed-loop frequency response of the feedback system shown in Fig. 8. From Fig. 7 and Fig. 8, it can be concluded that the proposed method updates the data of $\Delta T(\omega)$ corresponding to the frequency bands in the red rectangles of Fig. 8. The amplitudes of the system tracking errors corresponding to the frequency bands outside the red rectangles are small, close to the measurement noise level. As a result, according to the updating cut-off condition in (23), the proposed data-driven adaptive algorithm does not update the frequency response at frequencies where the corresponding tracking error is small enough. The above analysis further illustrates the effectiveness of the proposed adaptive algorithm in improving the frequency-response data used in feedforward compensation.

1) Experimental Setup

To better illustrate the effectiveness of the proposed method, experiments were performed on a linear motor of the wafer stage. It is noted that the controlled object of the experiment is different from the simulation, because the practical experimental platform does not have the low-weight, low-flexible-modal-frequency property; the comparative experimental results can nevertheless still illustrate the strength of the proposed method. The experimental setup is shown in Fig. 9. The real-time operating system is VxWorks. The mainboard and the motion control (MC) card are integrated into a VME64x card cage from the German company ELMA. The control-signal and data-signal flows between the PC and the control card cage are realized through the network cable and serial port, respectively. The MC card sends the control command to the motor driver via fiber; similarly, the data of the linear encoder are transmitted through fiber. The motor driver provides a current-loop bandwidth of 2000 Hz and a peak current of 60 A. The wafer stage is mounted on an air bearing with 400 kPa air pressure. The position sensor is a Heidenhain linear incremental encoder with an effective resolution of 0.05 µm and a maximum velocity of 0.3 m/s. The control methods are implemented in C language on a digital signal processor (DSP). The sampling period is $T_s = 200$ µs. The feedback controller $C_{fb}$ is a PI controller with a lead correction similar to (32). The closed-loop system is excited by a preset time series whose structure is the same as the reference trajectory in the simulation test, as shown in Fig. 2. Its parameters are $s = 0.1$ m, $v = 0.25$ m/s, $a = 10$ m/s², $J = 800$ m/s³, $D = 1 \times 10^{5}$ m/s⁴ and $H = 1 \times 10^{8}$ m/s⁵, respectively. Notably, to keep the same starting point of the linear motor in every iteration, the motor performs a reciprocating motion, where the reciprocating trajectory is the same. Similar to the simulation test, the proposed FD-AILC, FD-ILC and TF-ILC are compared. For the TF-ILC method, an approximate model of the plant is required, so a double-integrator model is adopted to fit the measured plant model shown in Fig. 10. The fitted result is expressed as $G_{est} = \frac{1}{34.3775\, s^{2}}$.
The learning law of TF-ILC in the experiment is the same as that in simulation, with parameters set as: 1) the learning gain $K_{TF} = 0.7$; 2) a second-order low-pass filter with a cut-off frequency of 100 Hz and a damping ratio of 1. It is also worth noting that the initial frequency response $\hat{T}_0(\omega)$ is required both in FD-AILC and FD-ILC. In FD-ILC it is used for all iterations, whereas in FD-AILC it is only used for the first iteration and the data are updated in subsequent iterations. The inverse of the initial frequency response, $\hat{T}_0^{-1}(\omega)$, is obtained through a frequency-response test on the practical stage, and its result is shown in the Bode diagram in Fig. 11, where the blue line is the practically measured data of $\hat{T}_0^{-1}(\omega)$. Because the measured high-frequency data are greatly affected by the measurement noise, only the frequency-range data shown as the red line in Fig. 11 are used in the iterative methods.

2) Experimental Results

The FD-AILC is performed for 5 iterations on the experimental stage, and the result is shown in Fig. 12. From Fig. 12, it can be concluded that the tracking error of the system drops significantly as the number of iterations increases, which confirms that the proposed method is effective in improving the tracking performance. Since there is no feedforward compensation when running the system for the first time, the tracking error is then relatively large, over 50 µm. After the first iteration, the tracking error is reduced to about 10 µm. However, the effectiveness of updating the learning law cannot be verified through the result in Fig. 12 alone; comparative experiments are required to further prove the effectiveness and superiority of the proposed method. The FD-ILC and TF-ILC methods are therefore run for 5 iterations as well, under the same experimental conditions, and the results are shown in Fig. 13, Fig. 14 and Fig. 15. It is noted that Error-J is defined in (34). In Fig. 12, Fig. 13 and Fig. 14, Iteration = 0 denotes the system running without any iterative learning output. From the first iteration onward, FD-ILC and TF-ILC use the frequency-response data $\hat{T}_0^{-1}(\omega)$ shown in Fig. 11 and the transfer-function model fitted by the double-integrator model, respectively, to calculate the ILC output, whereas the proposed FD-AILC uses $\hat{T}_0^{-1}(\omega)$ only for the first iterative trial and updates it through the adaptive learning law (14), according to the conditions in (23), for the other trials. From these experimental results, the following conclusions and analysis can be drawn. 1) After the first iteration, the maximum tracking errors of both FD-AILC and FD-ILC are very close and below 15 µm, on account of the same frequency-response data used in the first iteration. In contrast, the maximum tracking error of the TF-ILC method is about 50 µm, more than three times larger than those of the other two methods. This is because the approximate model of the controlled object used in TF-ILC is unable to describe the real characteristics of the system accurately, which proves that using the frequency-response information can effectively avoid this problem. 2) From Fig. 14, it can be confirmed that TF-ILC is helpful for improving the tracking accuracy after several iterations. However, the evaluation indicator Error-J does not decrease monotonically and even tends to increase. The model mismatch of the TF-ILC method is mainly responsible for this phenomenon.
The model mismatch may cause the system to violate the convergence condition of the iterative learning algorithm at some frequency points, which in turn gives the system tracking error an upward trend. 3) From Fig. 13, it can be concluded that after both the second and the third iteration, the tracking error of the FD-AILC method is smaller than that of the FD-ILC method, which demonstrates that updating the iterative learning law helps improve the tracking performance. 4) From Fig. 13 and Fig. 14, it can be observed that after 3 iterations, the maximum tracking error of the system using the proposed method converges to about 7 µm, whereas using FD-ILC it only converges to about 15 µm. Fig. 14 further illustrates that the proposed FD-AILC is better at improving the convergence speed and verifies the effectiveness of the proposed adaptive iterative law. 5) From Fig. 14, it can be noted that the Error-J of FD-ILC is slightly larger than that of FD-AILC and has an upward trend after 3 iterations, whereas in simulation the Error-J curve of FD-ILC does not show this trend. Analyzing this finding, there are two possible reasons. Firstly, similar to conclusion 2), it is caused by the mismatch of the frequency-response data: for FD-ILC, using inaccurate frequency-response data can produce incorrect feedforward compensation values at some frequency points, resulting in a slight deterioration of the compensation. The other reason is that in the experimental tests there are frequency-response data that do not satisfy the convergence condition at some frequency points, which also leads to a slight deterioration of the compensation. FD-AILC, by contrast, can avoid these problems through the adaptive iterative law, which further verifies the ability of the proposed method to maintain system stability. 6) Fig. 15 shows the experimental result for $\hat{T}(\omega)$ using the FD-AILC method. The curve of the estimated result looks smoother than the curve of the measured data, especially in the low-frequency range. This observation verifies, to some extent, the validity of FD-AILC in adaptively updating the frequency-response model. However, the improvement of $\hat{T}(\omega)$ is not very significant, because the practical experimental platform does not exhibit the complicated low-frequency characteristics. Consequently, the proposed FD-AILC can effectively improve both the tracking accuracy and the convergence speed, as well as avoid the performance deterioration caused by model mismatch. In addition, since no plant model is required, the proposed method helps reduce the workload of designing the control system.

V. CONCLUSION

This paper addresses practical problems of future lightweight flexible stages, including the high motion-performance requirements, the need for a fast convergence rate, and the complicated low-frequency characteristics. First, a model-free frequency-domain data-driven adaptive ILC method has been established, which enables the possibility of improving the tracking performance by adaptively updating the iterative learning law during the iterative process. Theoretical analysis indicates that the updating algorithm can obtain an unbiased estimation of the frequency response. Subsequently, the criteria for accelerating convergence are derived.
The numerical simulation and experimental results with comparison fully illustrate the benefits of the proposed model-free ILC approach: (1) its ability to achieve higher tracking accuracy; (2) its superiority in increasing convergence speed; and (3) its advantage in reducing the workload of designing the control system. Finally, future work will be directed towards tracking control with non-repetitive trajectories, which is more general in industrial applications.
Ultrasonic-Assisted Extraction of Xanthorrhizol from Curcuma xanthorrhiza Roxb. Rhizomes by Natural Deep Eutectic Solvents: Optimization, Antioxidant Activity, and Toxicity Profiles

Xanthorrhizol, an important marker of Curcuma xanthorrhiza, has been recognized for its different pharmacological activities, and a green strategy for selective xanthorrhizol extraction is required. Herein, natural deep eutectic solvents (NADESs) based on glucose and organic acids (lactic acid, malic acid, and citric acid) were screened for the extraction of xanthorrhizol from Curcuma xanthorrhiza. Ultrasound-assisted extraction using glucose/lactic acid (1:3) (GluLA) gave the best yield of xanthorrhizol. The response surface methodology with a Box-Behnken Design was used to optimize the interacting variables of water content, solid-to-liquid (S/L) ratio, and extraction time. The optimum conditions of 30% water content in GluLA, 1/15 g/mL (S/L), and a 20 min extraction time yielded selective extraction of xanthorrhizol (17.62 mg/g) over curcuminoids (6.64 mg/g). This study indicates the protective effect of GluLA and GluLA extracts against oxidation-induced DNA damage, which was comparable with that obtained for the ethanol extract. In addition, the stability of the xanthorrhizol extract over 90 days was revealed when stored at −20 and 4 °C. The FTIR and NMR spectra confirmed the hydrogen bond formation in GluLA. Our study reported, for the first time, the feasibility of using glucose/lactic acid (1:3, 30% water v/v) for the sustainable extraction of xanthorrhizol.

Introduction

Curcuma xanthorrhiza of the family Zingiberaceae is a rhizomatous plant that originates from Indonesia and is cultivated throughout tropical areas. The rhizomes of C. xanthorrhiza have long been used in Indonesian traditional medicine as a tonic and for the treatment of different diseases, including liver and stomach diseases [1]. Various bioactive compounds have already been identified in C. xanthorrhiza rhizomes, namely diarylheptanoids, phenolics, and terpenoids [2]. Of these, xanthorrhizol, a bisabolene sesquiterpenoid, is found to be the major bioactive compound in the rhizome [3]. Xanthorrhizol has received attention for its wide-ranging biological activities, such as anticancer [4], antimicrobial [5], anti-hyperglycemia [6], anti-inflammatory, and antioxidant activities [7]. Therefore, it is important to develop an efficient extraction method for xanthorrhizol. In addition to xanthorrhizol, curcuminoids (Figure 1) are present in significant abundance in the rhizome of C. xanthorrhiza [8]. Various studies have reported the use of natural deep eutectic solvents (NADESs) for the extraction of curcuminoids from different Curcuma species [9,10]. However, the selective extraction of xanthorrhizol over curcuminoids has not been reported yet.
To date, the extraction of xanthorrhizol has mostly been performed using conventional techniques, such as maceration, percolation, and Soxhlet extraction [6,11]. Organic solvents are largely applied for this purpose. The use of organic solvents causes several issues, as they are often toxic, flammable, explosive, and nonbiodegradable and, thus, hazardous to health and the environment. Growing concern for the environment and the need to obtain products less contaminated by hazardous residues of organic solvents have stimulated the development of greener extraction processes. Recently, natural deep eutectic solvents have gained recognition as an alternative to organic solvents. First reported by Abbot et al. [12], deep eutectic solvents consist of hydrogen bond acceptor (HBA) and hydrogen bond donor (HBD) components which, upon mixing, form hydrogen bond interactions among the components, leading to a liquid eutectic mixture. Different combinations of HBA and HBD components allow for the adjustment of solvent affinity to extract specific bioactive compounds. NADESs have been successfully applied for the extraction of different types of bioactive compounds, including flavonoids [13], anthocyanins [14], phenolics [15], and triterpenes [16].
Due to the high pharmacological potency of xanthorrhizol, together with its high abundance in the rhizomes of C. xanthorrhiza (accounting for 64.38% of the rhizome oil) [11] and its potential nutraceutical application, we proposed the use of NADESs as an alternative solvent for the selective extraction of xanthorrhizol from the rhizomes of C. xanthorrhiza. In the present study, three types of NADESs using glucose as a hydrogen bond acceptor and organic acids (lactic acid, malic acid, and citric acid) as hydrogen bond donors (Figure 2) were synthesized, characterized, and used in the extraction. Following the identification and characterization of the most promising NADES, optimization of the extraction was conducted using the response surface methodology (RSM), optimizing the extraction parameters, namely the solid-to-liquid (S/L) ratio, the water content, and the duration of extraction. NMR and SEM analyses were conducted to reveal the mechanisms of extraction by NADESs. A comparative study concerning the extraction yields, phytochemical profiles, protective effect against DNA damage, and stability of the xanthorrhizol extracts was conducted against a conventional extraction method using ethanol (96%).
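For context, the three-factor Box-Behnken design used in such an optimization has a fixed structure: 12 edge-midpoint runs plus replicated centre points, expressed in coded levels (-1, 0, +1) of water content, S/L ratio, and extraction time. A sketch in Python, with the number of centre points chosen arbitrarily:

# Generating a three-factor Box-Behnken design in coded units; the number
# of centre points is an illustrative choice.
import itertools
import numpy as np

def box_behnken_3(center_points=3):
    """12 edge-midpoint runs plus replicated centre points (coded units)."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0, 0, 0]] * center_points)
    return np.array(runs)

design = box_behnken_3()
print(design.shape)   # (15, 3): 12 factorial edge runs + 3 centre runs

Each coded row is then mapped to the physical factor ranges before the extractions are run and a quadratic response surface is fitted.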
NADES Preparation

In the present study, three glucose-based NADESs were evaluated for the extraction of xanthorrhizol from rhizomes of C. xanthorrhiza. In these NADES systems, glucose was combined with lactic acid, malic acid, and citric acid, each at a molar ratio of 1:3. The physicochemical properties of the studied NADESs are shown in Table 1. To allow for a comparison, all NADESs were prepared with the addition of the same amount of water (20%). The polarity of DESs can be determined indirectly by measuring the λ_max of the Nile red indicator in different solvents [17]. A more polar solvent shifts the λ_max of Nile red to a longer wavelength, thus lowering E_NR. The results in Table 1 and Figure 3 show a slight variation in the polarity of the glucose-based NADESs, in the following order: GluLA ≈ GluMA > GluCA. These results suggest that the acidity of the organic acids did not significantly influence the polarity of the studied NADESs. Under the same measurement conditions, the studied NADESs were highly viscous (Table 1). The viscosity of DESs originates from the hydrogen bond interaction between the components. The organic acids used in this study allow for maximum interaction with glucose as HBDs, with LA, MA, and CA containing mono-, di-, and tricarboxylic acid groups, respectively. GluMA and GluCA reached viscosities above 49.152 mPa·s. Glucose-based NADESs are known to be very viscous, as reported by Mitar et al. (2019) [18]. A viscous solvent may restrict mass transfer in the extraction process, which leads to lower extraction efficiency; however, high solvent viscosity allows for a stable molecular interaction [18]. Density is an important property of solvents. Generally, solvents with high density are more difficult to handle and mix in a chemical process; however, the selection of highly dense solvents can be beneficial to ensure phase separation in the extraction process. In the present study, the density of the glucose-based NADESs with 20% water (v/v) was determined, and the values range from 1.31 to 1.46 (Table 1), higher than those of water and ethanol, in the following order: GluCA > GluMA > GluLA. Mitar et al. (2019) found that density is a property that shows an additive relationship among the components [18]. With the same HBA (glucose), the order of density is likely due to the lengthening of the alkyl chain and the additional carboxylic groups in the HBD of the NADESs. In another study, it was also observed that density increases with the number of -OH groups present in the compounds [19], which is consistent with what was observed in this study.
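For the Nile red polarity measurements above, the E_NR value follows from the measured λ_max through the standard molar transition energy relation E_NR (kcal mol⁻¹) = 28591/λ_max (nm); the wavelength in the example below is made up.

# Converting a measured Nile red lambda_max (nm) into the E_NR polarity
# scale via the molar transition energy relation E = h*c*N_A/lambda.
def e_nr(lambda_max_nm: float) -> float:
    return 28591.0 / lambda_max_nm   # kcal/mol

print(f"E_NR = {e_nr(549.0):.1f} kcal/mol")   # hypothetical lambda_max of 549 nm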
The combination of glucose and lactic acid (1:1) was previously reported to be able to extract curcuminoids from C. longa with good yield [10]. In the present study, a different molar ratio and different organic acids were used to allow modification of the polarity of the synthesized NADESs to favor xanthorrhizol extraction over curcuminoids [20,21]. Each NADES was prepared through heating and stirring to obtain clear eutectic mixtures. It should be noted that in the process of NADES synthesis, no chemical reaction took place between the starting components. This indicates that their synthesis is highly efficient, produces no waste, and has a low cost.

Evaluation of NADES Antioxidative Activity

The antioxidative activity of NADESs is less well known, and only limited reports are available regarding this activity [22]. It is well known that the components forming the NADESs, such as lactic acid, malic acid, and citric acid, have antioxidant activity. Therefore, in this study, the antioxidant activity of the glucose-based NADESs with these organic acids was evaluated using DPPH and FRAP assays.
The DPPH radical-scavenging activity of the NADESs studied herein was between 50.42 and 60.69 µg AAE g−1 (Table 1). This activity was much higher than that of the organic solvent ethanol (6.53 µg AAE g−1). The antioxidant activity of the NADESs is expected to be due to the components forming them (lactic acid, malic acid, and citric acid), which are known to have radical-scavenging activity.

The FRAP assay is widely used for quantifying the antioxidant capacity of plant extracts. Ferric iron, Fe(III), is reduced to a lower oxidation state under acidic conditions by antioxidant compounds in the samples. Among the tested NADESs, GluCA exhibited the best ability to reduce Fe(III), followed by GluLA, whereas the lowest FRAP value was observed for GluMA (Table 1). This is possibly due to citric acid, which is well known in the literature to be a strong antioxidant.

Evaluation of NADES Toxicity

NADESs are considered safe, eco-friendly, and benign since their starting components are of natural origin and are found in living organisms. This implies cellular tolerance and low toxicity in living organisms. However, some reports have shown the opposite effects of NADESs [23], indicating that the toxicity profile of the final NADES mixture can differ from that of its individual components, possibly due to a synergistic effect between components [24]. In addition, although NADESs have the advantage of being sustainable solvents, their toxicity in the environment remains to be confirmed. Studies on the toxicity of NADESs have been conducted to a lesser extent, using different organisms such as vertebrates, invertebrates, and cell line models [25]. In the present study, the toxicity of the studied NADESs was assessed using bacteria as model organisms. A bacterial time-kill assay was employed, which is advantageous over other bacterial assays, such as disk tests, because it is simple and allows real-time analysis.

To analyze the tolerance of E. coli and S. aureus to the studied NADESs and, thus, the potential toxicity of the NADESs, cells were grown in LB media containing the different NADESs (GluLA, GluMA, and GluCA), and growth was monitored continuously overnight in real time. The results were compared with those of the control.

The E. coli growth curves in Figure 4A show two different patterns. The control culture (E. coli without NADES) showed a typical bacterial growth pattern. A short lag phase (adaptation period) was observed within 5 min of incubation, followed by an exponential growth phase (log phase) lasting about 8 h, and thereafter a stationary phase. Notably, E. coli showed diauxic curves, in which two growth phases were separated by a short lag. Diauxic growth curves have been reported previously for E. coli and many other bacteria [26]. In contrast, a different pattern was shown by E. coli treated with GluLA, GluMA, or GluCA, in which no cell growth was detected, indicating the toxicity of GluLA, GluMA, and GluCA toward E. coli.
Figure 4B shows the growth curves of S. aureus in LB media containing GluLA, GluMA, and GluCA. The control culture shows the expected growth pattern, with a short lag phase followed by a fast growth rate that yields high biomass, the final OD value reaching around 1, and thereafter a stationary phase. However, unlike in E. coli, treatment with GluLA and GluMA did not abolish S. aureus growth. In the case of GluMA, the curve shows growth similar to the control. The lag and log phases occur at similar incubation times; however, a decreased growth rate is observed for GluMA, yielding a lower final OD of around 0.6 (40% lower than the control). A different growth profile is observed for GluLA. S. aureus experienced a prolonged lag phase of about 6 h of incubation. Following treatment with GluLA, the bacterium grew at a much slower rate, as can be seen in the log phase, reaching a final OD of 0.4 (60% lower than the control). In the case of GluCA, treatment of S. aureus caused cell death, as also observed in E. coli.

The outer membrane of Gram-negative bacteria, the lipopolysaccharide (LPS) layer, is generally negatively charged [25,27]. The LPS is stabilized by divalent cations within the LPS core, which form electrostatic cross-links with the phosphate groups of the LPS core, reducing the electrostatic repulsion between membrane groups. In the present study, 50 µL of each NADES, i.e., GluLA and GluMA (each at pH 5-6 in LB media) and GluCA (pH 3-4 in LB media), was added to 15 mL of E. coli culture in LB media. The water that dominates the surroundings of the diluted NADESs disrupts the hydrogen-bond network between their components, releasing the anionic lactate, malate, and citrate. These anions may chelate the divalent metal ions, leading to destabilization of the LPS layer. The antibacterial activity of citric acid is well known, and a previous study showed membrane damage, visualized by scanning electron microscopy (SEM), due to low pH [27]. LPS disruption may also be due to acidification of the intracellular compartment. Small uncharged organic acids, which are more lipophilic, may also permeate the LPS membrane and release protons into the intracellular environment, collapsing the proton gradient [28].
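The proposed release of lactate, malate, and citrate anions can be rationalized from the acid dissociation equilibria at the reported media pH values. A minimal sketch using the Henderson-Hasselbalch equation; the pKa figures are approximate literature values assumed here, not measurements from this study:

```python
# Approximate first pKa values from the literature (assumed, not measured here).
PKA1 = {"lactic": 3.86, "malic": 3.40, "citric": 3.13}

def ionized_fraction(pka: float, ph: float) -> float:
    """Fraction of the acid present as its (first) conjugate base at a given pH,
    from Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)

# Reported pH in LB media: GluLA and GluMA ~5-6, GluCA ~3-4.
for acid, ph in [("lactic", 5.5), ("malic", 5.5), ("citric", 3.5)]:
    f = ionized_fraction(PKA1[acid], ph)
    print(f"{acid}: ~{100 * f:.0f}% deprotonated (first ionization) at pH {ph}")
```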
NADES Extraction of Xanthorrhizol and Determination of the Optimal Conditions Using an Ultrasound-Assisted Extraction Process

Response surface methodology (RSM) with a Box-Behnken design (BBD) was applied to optimize the xanthorrhizol extraction. The independent variables for RSM, selected based on our preliminary studies, were water content (10-30%), solid-to-liquid ratio (1/5 to 1/20 g/mL), and extraction time (10-30 min) (Supplementary Table S1). The experimental responses were the xanthorrhizol and curcuminoid contents, analyzed using a validated method of thin layer chromatography (TLC) with densitometry. The BBD and the experimental values of the xanthorrhizol and curcuminoid contents are listed in Table 2. The final xanthorrhizol content, and likewise the curcuminoid content, was described as a function of the three independent variables by a second-order polynomial regression equation of the form

Y = β0 + β1X1 + β2X2 + β3X3 + β11X1² + β22X2² + β33X3² + β12X1X2 + β13X1X3 + β23X2X3

where Y is the xanthorrhizol or curcuminoid content, and X1, X2, and X3 represent the water content in GluLA, the solid-to-liquid ratio, and the extraction time, respectively. The suitability of the quadratic polynomial equations was analyzed as shown in Supplementary Table S2. It is noteworthy that for the xanthorrhizol model, the p value of the "Model" is <0.0001, the p value of the "Lack of Fit" is not significant, and the R² is 0.9975; meanwhile, all the linear coefficients (X1, X2, and X3), the quadratic coefficient X3², and the cross coefficients (X1X2, X1X3, and X2X3) are significant (p < 0.05), and only the quadratic coefficients X1² and X2² are not significant (p > 0.05). The suitability of the quadratic equation for curcuminoids can be seen in Supplementary Table S2.

Based on the regression model, the predicted optimum conditions for the extraction of xanthorrhizol by ultrasound-assisted extraction with GluLA are 30% water addition to GluLA, a 1/15 (g/mL) solid-to-liquid ratio, and a 20 min extraction time. The model predicted 17.62 ± 0.20 mg/g dried rhizome for xanthorrhizol and 6.64 ± 0.05 mg/g dried rhizome for curcuminoids under these conditions. The experimental values for the xanthorrhizol and curcuminoid contents were 21.75 ± 0.06 and 6.79 ± 0.10 mg/g, close to the predicted values, indicating good accuracy for the final xanthorrhizol and curcuminoid contents. These results signify the selectivity of the optimized extraction for xanthorrhizol over curcuminoids. To the best of our knowledge, this is the first time NADESs have been reported for the extraction of xanthorrhizol.
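The model fitting and optimum search can be reproduced in outline with ordinary least squares on the coded BBD matrix. A minimal sketch, with a placeholder design matrix and responses standing in for Table 2 (the actual 17 runs and coefficients come from Design-Expert):

```python
import numpy as np

# Placeholder coded BBD runs (X1 = water %, X2 = solid-to-liquid, X3 = time);
# substitute the 17 runs and measured responses from Table 2.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(17, 3))
y = rng.normal(15.0, 2.0, size=17)  # xanthorrhizol content, mg/g (placeholder)

def design_matrix(X):
    """Second-order model: intercept, linear, quadratic, and two-way interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1**2, x2**2, x3**2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predict(x):
    return design_matrix(np.atleast_2d(x)) @ beta

# Grid search for the predicted optimum over the coded factor space.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best = grid[np.argmax(predict(grid))]
print("coded optimum (X1, X2, X3):", best)
```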
Xanthorrhizol Extraction by GluLA and Comparison with Ethanol

Ethanol Maceration

As previously mentioned, ethanol maceration is the method recommended by the Indonesian Farmakope for the extraction of xanthorrhizol from rhizomes of C. xanthorrhiza. With ethanol maceration, the yields obtained were 9.14 ± 0.01 mg/g and 2.37 ± 0.01 mg/g for xanthorrhizol and curcuminoids, respectively. The comparison of extraction yields between the ultrasound-assisted NADES process and the ethanol maceration method highlights the efficiency of UAE-NADES extraction in terms of extraction time (20 min in comparison with several days). Although GluLA was more viscous than ethanol and thus may limit mass transfer in the extraction process, the use of ultrasonication aided the extraction. The efficiency of combined UAE-NADES extraction was reported previously by Patil et al. (2021) [20].

Surface Morphology Analysis

The structural changes in the surface morphology of C. xanthorrhiza rhizome powder were investigated using scanning electron microscopy (SEM) to reveal the effect of GluLA and ultrasonication on the raw material. Comparisons were made between untreated samples, samples soaked in ethanol, and samples treated with GluLA.

Untreated rhizome powder showed an intact and smooth surface (Figure 5(A1,A2)). The surface of the rhizome particles did not change appreciably following extraction with ethanol (Figure 5(B1,B2)). By comparison, the external rhizome surface treated with GluLA (Figure 5(C1,C2)) exhibited a different morphology: the particles showed a loose, disintegrated, and cracked surface. This structural damage can be attributed to sonoporation arising from the combined use of GluLA and exposure to ultrasonication during extraction. Ultrasonic wave vibration in the GluLA medium propagates the formation of cavitation bubbles. Although viscous solvents such as NADESs may need higher energy to induce the formation of cavitation bubbles, their low vapor pressure produces more intense bubble collapse [29], giving rise to stronger local turbulence. The mechanical effect of bubble implosion may aid in the disruption and disintegration of the rhizome surface, allowing the mass transfer of phytocompounds into the bulk solvent.
Metabolite Identification of GluLA and Ethanol Extracts

The chemical constituents of the GluLA and ethanol extracts of C. xanthorrhiza were analyzed in LC-MS/QTOF experiments. The chromatograms of the extracts are shown in Figure 6. Table 3 presents the observed peaks and the putative identification of the compounds, with a mass error of ±10 ppm, indicating good mass accuracy for the compounds identified in the mass spectra. These compounds were tentatively identified by interpreting the chemical formulae of the molecular ions [M + H]+, the fragmentation patterns, and the elution order. This information was compared with available data in the literature documenting previous identifications of these compounds. Public mass databases such as MassBank, NIST, PubChem, and ChemSpider were also used for comparison. Moreover, literature on the phytochemical profiles of the rhizomes of C. xanthorrhiza was used as indicative information for component identification [2,30-32].

The LC-MS/QTOF analysis of the GluLA and ethanol extracts confirms the presence of well-known compounds previously reported in the rhizome of C. xanthorrhiza. The compounds identified herein belong to different groups, such as terpenoids, diarylheptanoids, flavonoids, and phytosterols (Table 3). Overall, different chromatogram profiles were obtained for the GluLA and ethanol extracts (Figure 6). The GluLA profile (Figure 6B) shows more peaks in the retention time (RT) range of 4.04 to 7.30 min than the ethanol extract (Figure 6A); most of these are attributed to different terpenoids. On the other hand, the ethanol extract exhibits peaks in the RT range of 14 to 17 min, most of which correspond to various diarylheptanoids. This discrepancy reflects the difference in polarity between GluLA and ethanol, resulting in different constituents being extracted by the corresponding solvents.
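The ±10 ppm acceptance window corresponds to a simple relative-error calculation between the observed and theoretical m/z of each candidate ion. A minimal sketch; the example masses are hypothetical, not values taken from Table 3:

```python
def mass_error_ppm(observed_mz: float, theoretical_mz: float) -> float:
    """Relative mass error in parts per million."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical [M + H]+ for xanthorrhizol (C15H22O, monoisotopic M = 218.1671 Da).
theoretical = 218.1671 + 1.00728  # add the proton mass for [M + H]+
observed = 219.1751               # hypothetical measured m/z
err = mass_error_ppm(observed, theoretical)
print(f"error = {err:+.1f} ppm -> {'accept' if abs(err) <= 10 else 'reject'}")
```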
DNA Damage Protection Activity of GluLA and Ethanol Extracts

The double-strand cleavage of plasmid DNA by highly reactive •OH radicals generated by the Fenton reaction can mimic what occurs in a biological system. In the presence of H2O2/FeSO4, •OH radicals are formed through electron transfer involving the oxidation of Fe(II) to Fe(III). These radicals induce the cleavage of supercoiled DNA (sc-DNA), resulting in open circular (oc-DNA) and linear (lin-DNA) conformations with lower electrophoretic mobility [33]. The separation of these conformations can be observed by gel electrophoresis. The DNA protection assay is a well-known biomarker for evaluating the antioxidant activity of plant extracts.

The DNA protection assay was conducted at different concentrations of GluLA, GluLA extract, and ethanol extract. The results are presented in Figure 7. Lane 1 shows the untreated plasmid, in which the supercoiled form, characterized by high electrophoretic mobility, was predominant in the absence of H2O2 and Fe(II) ions. Lane 2 shows DNA damaged by the •OH radicals from the Fenton reaction: the DNA was converted into the open circular (oc-DNA) and linear (lin-DNA) forms, which moved more slowly in the gel. Lanes 3 to 8 show DNA treated with the samples at increasing concentrations. The addition of GluLA in the range of 1.38-44.18 mg/mL (lanes 3 to 8) suppressed the formation of lin-DNA and oc-DNA and induced the partial recovery of sc-DNA in a dose-dependent manner. The DNA-protective effect of GluLA is likely caused by the radical-scavenging activity and reducing capacity of GluLA, as shown in Table 1. In addition, the ability of LA to chelate Fe(II), thus inhibiting the Fenton reaction, may also contribute to the DNA-protective effect of GluLA. Similar to GluLA, GluLA extract showed a protective effect against DNA damage in the same concentration range as GluLA, indicating a contributory role of GluLA in this activity. It should be noted, however, that the ethanol extract exerted DNA protection activity at lower concentrations (0.01-0.17 mg/mL) than GluLA and GluLA extract. Phenolics extracted from different plants have been reported to prevent breakage of plasmid DNA strands [34].

The band intensities were further analyzed with ImageJ V1.8.0 software to quantify oc-DNA relative to sc-DNA. The results in Figure 7 confirm that GluLA is effective in protecting DNA and inhibiting strand breakage by •OH radicals. The band analysis shows that at 22.06 and 44.11 mg/mL, GluLA increased the native form of DNA by 49.51 and 59.01%, respectively. GluLA extract at the same concentrations retained the native DNA at 61.18 and 87.83%. The ethanol extract protected DNA effectively, retaining more than 70% of the native DNA at 0.02 mg/mL. These results highlight the antioxidative activity of the extracted bioactive compounds, including xanthorrhizol and curcuminoids.

The use of antioxidants to protect DNA strands from breakage is beneficial for suppressing oxidative damage, thus potentially preventing some diseases, including cancer and degenerative diseases. The findings obtained in the present study suggest a potential health benefit of C. xanthorrhiza extracts in preventing the health risks posed by oxidative damage to DNA.
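The ImageJ band readout reduces to a simple fraction of the supercoiled form over all conformations per lane. A minimal sketch with hypothetical intensity values:

```python
# Hypothetical ImageJ band intensities per lane: (sc-DNA, oc-DNA, lin-DNA).
lanes = {
    "native plasmid": (9500, 500, 0),
    "Fenton only":    (1200, 6800, 2000),
    "GluLA 44 mg/mL": (5900, 3600, 500),
}

for label, (sc, oc, lin) in lanes.items():
    native_pct = 100 * sc / (sc + oc + lin)
    print(f"{label}: {native_pct:.1f}% native (supercoiled) DNA")
```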
Stability of Xanthorrhizol Extracted by GluLA and Ethanol

Xanthorrhizol is susceptible to storage conditions, such as light exposure and length of storage. As per the product data sheet, the xanthorrhizol standard compound should be stored at −20 °C away from light. To date, no study has reported the stability of xanthorrhizol in extracts. To allow for further applications of the xanthorrhizol-rich NADES extract, it is important to study the stability of the extracted xanthorrhizol under different storage conditions. The present study investigated the stability of xanthorrhizol in GluLA at −20, 4, and 25 °C over a 90-day storage period. The results are shown in Figure 8.

Xanthorrhizol showed similar stability in the GluLA and ethanol extracts at storage temperatures of −20 °C and 4 °C (Figure 8A,B). At −20 °C, the extracted xanthorrhizol in GluLA remained stable over the 90-day period, with only 4% degradation observed. A higher degradation of xanthorrhizol, 13%, was observed for the ethanol extract at −20 °C over the same period. A storage temperature of 4 °C resulted in higher degradation than −20 °C for both the GluLA and ethanol extracts, by 15 and 13%, respectively. Different results were obtained when the extracts were stored at 25 °C (Figure 8C). The extracted xanthorrhizol in the GluLA extract was not stable at this temperature, with only 33% of the xanthorrhizol remaining at the end of the assay; apparent degradation occurred after day 30. Interestingly, xanthorrhizol was stable in the ethanol extract, with 96% still retained after 90 days.

The above results indicate that storage temperature plays a significant role in the stability of xanthorrhizol in the extracts. It is likely that low temperature restricts the movement of xanthorrhizol molecules, limiting their exposure to oxidative species. With regard to the extraction solvent, xanthorrhizol in GluLA showed higher stability than in the ethanol extract when stored at −20 °C, indicating a contribution of GluLA to the observed stability.

This stabilization may originate from hydrogen bonding between xanthorrhizol and GluLA, as described in the previous section. The establishment of hydrogen bonds could restrict the movement of xanthorrhizol molecules, thus limiting their contact with oxidative species. π-π stacking between the phenolic rings of xanthorrhizol may also hold the molecules stable.
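The retention percentages can be converted into an approximate degradation rate, assuming first-order kinetics; this assumption is mine for illustration, not a claim of the study:

```python
import math

def first_order_k(ct_over_c0: float, days: float) -> float:
    """Apparent first-order degradation constant k (1/day) from Ct/C0 = exp(-k t)."""
    return -math.log(ct_over_c0) / days

# Retention after 90 days as reported: GluLA extract kept 96% at -20 C and 33% at 25 C.
for label, retained in [("GluLA, -20 C", 0.96), ("GluLA, 25 C", 0.33)]:
    k = first_order_k(retained, 90)
    half_life = math.log(2) / k
    print(f"{label}: k ≈ {k:.4f} 1/day, t1/2 ≈ {half_life:.0f} days")
```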
Previously, Dai et al. (2013 and 2014) studied the mechanisms of stabilization of quercetin in choline chloride-xylitol. Using 2D-NMR and FT-IR spectra, it was found that extensive hydrogen bonding between solutes and NADESs greatly increases their storage stability [35,36].

FTIR and NMR Characterization of the Optimal NADESs and Interaction of NADESs with Xanthorrhizol

FT-IR spectroscopy is a typical technique to identify H-bond interactions in a system [37]. In NADES syntheses, the formation of H-bonds between the HBD and HBA components is the main intermolecular force between the components. Therefore, FT-IR spectroscopy was applied to evaluate the evidence of NADES formation. The FT-IR spectra of GluLA and its components, i.e., glucose and lactic acid, are shown in Figure 9A. The GluLA spectrum was dominated by lactic acid functional groups. The O-H stretching vibration of the OH group of lactic acid was observed as a broad band at 3414 cm−1, while the O-H vibration of glucose appeared at a lower wavenumber (3240 cm−1). Band shifting in this region was noticeable after the formation of GluLA, with the O-H vibration appearing at 3358 cm−1. This shift may indicate that the OH functional groups of lactic acid and glucose take part in the formation of H-bonds in the NADES. It is known that O-H vibration band shifting indicates an interaction between HBAs and HBDs in NADESs [14,38]: the formation of hydrogen bonds alters the electron density in the bond, which changes the frequency of the stretching vibration. Peaks of lactic acid at 1120 and 1211 cm−1 can be assigned to the C-O-H bending vibration and the C-O stretching vibration, respectively. These peaks shifted to higher wavenumbers in GluLA, to 1123 and 1218 cm−1. On the other hand, the C=O stretching vibration of LA (at 1718 cm−1) did not shift after the formation of GluLA. It should also be noted that during the preparation of GluLA (1:3), 30% water was added in order to reduce viscosity and enhance extraction performance; the added water may also contribute to the H-bonding in the NADES. The vibrational band shifting can be explained by changes in the electron density of the oxygen atoms following interactions with neighboring hydrogen atoms, which may lead to a decrease in the force constant and thus change the vibrational state.

In the NMR spectrum of GluLA (Figure 9C), the peaks related to glucose (Figure 9A) and lactic acid (Figure 9B) were preserved, indicating that no chemical reaction took place during the formation of the NADES. However, some peaks of the NADES were slightly shifted downfield (designated by arrows in the figure), for example, the chemical shifts at 1.227-1.463 ppm in the NADES versus 1.353-1.484 ppm in lactic acid, suggesting changes in the chemical environment caused by the formation of hydrogen bonds.

In the 1H-1H NOESY experiment, the formation of H-bonds is supported by the interaction between the proton of the -OH group at C3 of glucose and the proton of the -COOH group of lactic acid (Figure 10). The target compound xanthorrhizol can be seen interacting via the proton of its -OH functional group with the proton of the -COOH of lactic acid (Figure 10).
Plant Material and Chemical Reagents

The rhizomes of C. xanthorrhiza used in this study were obtained from the Research…

NADES Preparation

NADESs were prepared using a heating and stirring method as described elsewhere [39]. Appropriate amounts of the starting components (glucose/organic acid, 1:3), as given in Supplementary Table S3, were added to a beaker. The mixture was heated (70-100 °C) and stirred continuously on a hot plate until a clear transparent liquid was obtained. After the formation of the NADES, water was added (20%, v/v, of the total NADES volume) to reduce viscosity and aid the extraction. The liquid was stirred for a further 30 min and transferred into a closed glass vessel kept at ambient temperature.

Physical Properties of NADESs

The viscosity of the different NADESs was measured using an Anton Paar ViscoQC 300 viscometer (Graz, Austria) with a DG26 spindle, run for 30 s at 3.00 rpm. The density was determined by weighing 1 mL of each NADES on a Mettler Toledo ML303 analytical balance (Greifensee, Switzerland).

The polarity of the NADESs was determined using the Nile red solvatochromic probe (9-diethylamino-5-benzo[a]phenoxazinone), as described previously [19]. Nile red (20 µL, 0.01 g/mL in ethanol) was added to the NADES (980 µL). After mixing well, the mixture was scanned in the visible region (400-750 nm) using a Libra S-22 UV-Vis spectrophotometer (Cambridge, UK). The molar transition energy was determined by the following formula:

ENR (kcal/mol) = 28591/λmax (nm)

where ENR is the molar transition energy in kcal/mol (the results shown in Table 1 were converted into the SI unit kJ/mol) and λmax is the wavelength of maximum absorbance of each NADES. All measurements were conducted at ambient temperature.
Xanthorrhizol Extraction

The extraction of xanthorrhizol from the rhizome powder of C. xanthorrhiza was performed with NADESs and with ethanol. Ultrasound-assisted extraction (UAE) was used for the NADES extractions in a screening step to select the glucose-based NADES with the highest efficiency. In brief, rhizome powder (0.1 g) was mixed with previously prepared NADES containing 10-30% water, at a liquid-to-solid ratio of 5-15 mL per 0.1 g. Extraction was carried out over 10-30 min. After extraction, the mixture was centrifuged at 2000 rpm for 12 min. The supernatant was filtered and kept at 4 °C until further analysis.

For comparison, ethanol (96%) maceration was conducted following the method reported in the Farmakope Herbal Indonesia (2017). Rhizome powder was macerated in ethanol (96%) at a ratio of 1/10 (g/mL) for 18 h with intermittent shaking. After filtration, the ethanol was removed under reduced pressure using a Buchi R300 rotary vacuum evaporator (Flawil, Switzerland). The dried extract was kept refrigerated at 4 °C until further analysis.

Extraction Optimization Using Response Surface Methodology

Based on preliminary experiments, three factors that influence the xanthorrhizol yield were selected, namely the water content in GluLA, the solid-to-liquid ratio, and the ultrasonic extraction time. Response surface methodology (RSM) with a Box-Behnken design (BBD) (Design-Expert software version 13, Stat-Ease, Inc., Minneapolis, MN, USA) was applied to analyze the interactive effects of these factors. Each factor was set at three levels, as shown in Supplementary Table S1. A total of 17 experiments were performed, as listed in Table 2. A second-order polynomial equation was applied to generate an experimental model correlating the responses with the three independent variables (water content, solid-to-liquid ratio, and extraction time), as follows:

Y = β0 + Σ(i=1..k) βi Xi + Σ(i=1..k) βii Xi² + Σ(i<j) βij Xi Xj

where Y is the response variable (xanthorrhizol or curcuminoid content); β0, βi, βii, and βij represent the regression coefficients of the intercept, linear, quadratic, and interaction terms, respectively; Xi and Xj are the independent variables; and k is the number of variables (k = 3). The relationship between the independent variables and the responses was examined using analysis of variance (ANOVA) with a significance level of p < 0.05, available in Design-Expert version 13.

Determination of Xanthorrhizol Content by TLC Densitometric Analysis

The chromatographic analysis of xanthorrhizol and curcuminoids was based on a previously reported TLC method [40]. Separation was carried out on silica gel 60 GF254 plates, run with a solvent system of dichloromethane/chloroform (4:6). The detection and quantification of the marker compounds were conducted on a CAMAG-3 TLC densitometry scanner at wavelengths of 224 and 425 nm for xanthorrhizol and curcuminoids, respectively. A representative TLC-densitometric chromatogram of the NADES extract can be seen in Supplementary Figure S1.

Phytochemical Analysis by UPLC-QTOF-MS

A Waters ACQUITY UPLC H-Class System (Milford, MA, USA) was used to generate mass data on the GluLA and ethanol extracts. Chromatographic separation was performed using a Waters ACQUITY UPLC HSS C18 column (1.8 µm, 2.1 × 100 mm), with the column temperature maintained at 50 °C.
Binary solvent systems were used, consisting of 5 mM ammonium formate in water (eluent A) and 0.05% formic acid in acetonitrile (eluent B). A sample volume of 5 µL was injected and eluted at a flow rate of 0.2 mL/min for a total run time of 23 min. The UPLC system was coupled to a Xevo G2-S QTof mass spectrometer (Milford, MA, USA). The collision energies of the low- and high-energy functions were set at 4 and 60 eV in positive electrospray ionization (ESI+) mode, over a mass range of m/z 50-1200. The cone and desolvation gas flow rates were 0 and 793 L/h, respectively. Data acquisition and instrument control were conducted using MassLynx software version 4.1.

DPPH Radical-Scavenging Activity Assay

To test whether the NADESs could act as radical-scavenging agents, a DPPH assay was applied in accordance with a previously reported method [41]. A volume of 50 µL of sample was added to methanol (50 µL). The DPPH methanolic solution (0.6 mM, 80 µL) was then added to each well. The mixture was left to stand in the dark at ambient temperature for 30 min. The absorbance was read at 515 nm on a Bio-Rad iMark microplate reader (Hercules, CA, USA). Methanol was used as the control. A calibration curve was generated using ascorbic acid as the reference (3.13-100 µg/mL). The resulting linear regression equation (y = 0.0085x + 0.9992, R² = 0.9729) was used to calculate the radical-scavenging activity, expressed as the ascorbic acid equivalent (µg AAE/mg sample).

DNA Protection Assay

The ability of the optimal NADES and NADES extract to protect DNA against oxidative damage was evaluated using a DNA protection assay, as reported previously, with some modifications [43]. The pBR322 plasmid DNA (BioLabs, Ipswich, MA, USA) was used as the model, and •OH free radicals were generated by a Fenton reaction. The reaction mixture (17 µL) contained pBR322 plasmid DNA (5 µL, 5 µg), extract at different concentrations (5 µL, 1.38-44.18 mg/mL), phosphate-buffered saline (3 µL, 10 mM, pH 7.4), FeSO4 (2 µL, 1 mM), and H2O2 (2 µL, 1 mM). Following incubation for 30 min at 37 °C, the reaction was terminated by the addition of bromophenol blue loading buffer (Geneaid, New Taipei City, Taiwan), containing bromophenol blue (0.05%), glycerol (50%, v/v), and EDTA (40 mM). The mixture was then loaded onto a 0.85% agarose gel in Tris/acetate/EDTA buffer, to which GelRed staining dye (Biotium, Fremont, CA, USA) had previously been added. Electrophoresis was run for 60 min at 60 V, and the gel was then visualized under UV light and photographed using an Azure Biosystems C280 GelDoc system. The results were compared with those obtained for the ethanol extract.

Real-Time Bacterial Growth Determination

The NADESs were studied for their toxicity against Escherichia coli ATCC 33218 and Staphylococcus aureus ATCC 25923 by measuring real-time bacterial growth according to the method reported by Torregrosa-Crespo et al. (2020), with some modifications [26]. E. coli and S. aureus were grown in Luria-Bertani (LB) medium overnight at 37 °C in a shaking incubator at 300 rpm. On the following day, the overnight culture (500 µL) was placed in a 50 mL Corning tube together with LB medium (15 mL) and the NADES sample (50 µL). The tube was placed in a personal bioreactor (BioSan RTS 001/001C, Riga, Latvia) and run at 37 °C and 500 rpm. Bacterial growth was monitored by recording the optical density at 850 nm at 15-min intervals over periods of 15 and 22 h for E. coli and S. aureus, respectively.
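The logger output described above is an OD850 time series at 15-min intervals. A minimal sketch of how such a series could be reduced to the growth metrics discussed in the Results (lag time, maximum specific growth rate, final OD), using synthetic data in place of a real export:

```python
import numpy as np

# Hypothetical OD850 readings at 15-min intervals; replace with the logger export.
t = np.arange(0, 15, 0.25)                       # hours
od = 0.05 + 0.95 / (1 + np.exp(-1.2 * (t - 5)))  # synthetic logistic-like curve

# Maximum specific growth rate from the steepest slope of ln(OD).
ln_od = np.log(od)
mu = np.gradient(ln_od, t)
mu_max = mu.max()

# Crude lag estimate: time where the tangent at mu_max crosses the initial ln(OD).
i = mu.argmax()
lag = t[i] - (ln_od[i] - ln_od[0]) / mu_max

print(f"mu_max = {mu_max:.2f} 1/h, lag ≈ {lag:.1f} h, final OD = {od[-1]:.2f}")
```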
FTIR Spectroscopy Analysis

The FTIR spectra of the glucose-based NADESs were obtained using an Agilent Cary 630 FTIR spectrophotometer (Santa Clara, CA, USA). The measurements were carried out in attenuated total reflectance (ATR) mode, in the range of 4000-650 cm−1, at a 4 cm−1 spectral resolution, with the accumulation of 16 scans.

NMR Spectroscopy Analysis

1D and 2D (1H-NMR and 1H-1H NOESY) NMR spectra were obtained for the optimal NADES and its individual components. The measurements were conducted on a 500 MHz Bruker Avance III 400 (Bruker, Billerica, MA, USA). Chemical shifts were referenced against TMS as an external standard (δ in ppm). The NADESs and their starting compounds were prepared in DMSO-d6.

Figure 2. NADES components used in the present study.

Figure 3. The UV-Vis spectra of the Nile red solvatochromic probe in different NADESs.

Figure 8. Stability of the extracted xanthorrhizol in glucose/lactic acid (1:3) and ethanol over 90 days of storage at (A) −20, (B) 4, and (C) 25 °C. Ct is the concentration of xanthorrhizol at time t, whereas C0 is the concentration of xanthorrhizol at the start of the experiment.

Author contributions (excerpt): funding acquisition, A.M. and R.A.N. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by The Ministry of Research and Technology and Higher Education of the Republic of Indonesia for PKN/PK 2023 (grant number: NKB-1084/UN2.RST/HKP.05.00/2023), which provided funding for the research materials and the APC, and by the Institute for Research and Community Service (Lembaga Penelitian dan Pengabdian pada Masyarakat-LPPM) of Krida Wacana Christian University (grant number 02/UKKW/LPPM-FKIK/LIT/05/2023), which provided funding for the HPLC experiments.

Institutional Review Board Statement: Not applicable.

Table 1. Physicochemical properties of the studied NADESs.

Table 2. RSM of independent factors (X1, X2, and X3) for UAE and experimental results for xanthorrhizol and curcuminoid contents (mg/g dried rhizome).

Table 3. Phytochemicals identified in GluLA and ethanol extracts of Curcuma xanthorrhiza by LC-MS/QTOF. Data were obtained in ESI-positive mode.
Image Guided Intraoperative Radiation Therapy After Surgical Resection of Brain Metastases: A First In-Human Feasibility Report

Purpose: A correct placement of the applicator during intraoperative radiation therapy for brain metastasis is of paramount importance for delivering a precise and safe treatment. The applicator-to-surface contact cannot be assessed under direct observation because the applicator itself limits the visual range, and no image guided verification is currently performed intracranially. We hypothesize that image guided intraoperative radiation therapy would assure a more precise delivery in the target area. We describe our workflow in a first in-human experience.

Methods and Materials: Phantom-based measurements were performed to reach the best cone beam computed tomography imaging quality possible. Once defined, a clinical feasibility study was initiated. An in-room cone beam computed tomography device is used to acquire intraoperative images after placing the applicator. Repositioning the applicator is thereafter discussed with the surgeon, according to the imaging outcomes, if required.

Results: An optimal image quality was achieved with 120-kV voltage, 20-mA current, and a tube current-time product of 150 mAs. An additional 0.51 mSv patient exposure was calculated for the entire procedure. The wide dynamic range (−600 HU to +600 HU) of cone beam computed tomography and a 27 HU mean difference in computed tomography values between brain tissue and the spherical applicator allow both structures to be distinguished. In this first in-human experience, the applicator was repositioned after air gaps were evidenced, assuring full applicator-to-surface contact.

Conclusions: This first in-human procedure confirmed the feasibility of kilovoltage image guided intraoperative radiation therapy in a neurosurgical setting. A prospective study has been initiated and will provide further dosimetric details.

Introduction

Intraoperative radiation therapy (IORT) presents an alternative or complementary treatment modality for various indications. In the setting of adjuvant irradiation of brain metastases, a spherical applicator is inserted into the surgical cavity and nominal 50-kV x-rays are delivered to the cavity surface and to a limited depth.1 Depending on the cavity volume, this single-fraction irradiation allows the standard external-beam treatment to be omitted, thus reducing hospital visits.2,3 A sharp dose attenuation counts among the features of kilovoltage (kV) irradiation, according to the distance-squared law (Equation 1.0).4 Because in the kilovoltage spectrum the average range of secondary electrons is rather short, the maximum dose is mostly delivered at the surface. The resulting gradient yields a lower exposure of healthy brain and other surrounding organs at risk.3,5

I(r) = I(r0)/r²   (1.0)

A drawback of IORT is the lack of image-based verification. Positioning the applicator correctly is essential to allow precise dose calculations. This is of utmost importance because the generated x-rays are sharply attenuated over only a short distance. In practice, the dose is prescribed to the applicator surface (±0.0 mm). This means that neither air gaps nor tissue heterogeneities (eg, hemostatic patches) should be present between the applicator and the resection cavity. These could lead to incomplete dose delivery, which is especially relevant in a single-shot procedure,6,7 potentially increasing the risk of local recurrence.6,8
Therefore, without intraoperative imaging, a misplacement cannot be ruled out.7,9

With the implementation of in-room surgical imaging systems, image guided positioning of the spherical applicator has become possible, improving the precision and, thus, the therapeutic range of IORT. In addition, this extends the evaluability of anatomic structures and foreign objects with high x-ray attenuation (eg, bones or surgical instruments), so that possible artifacts are limited and do not hamper the imaging quality. Hence, controlling these factors could minimize any dosimetry-related recurrence risks.7,9

Methods and Materials

Implementation of the dosimetric feasibility study

In preparation for the study, 2 simulation procedures were performed at different time points, mimicking an actual surgical scenario and involving an interdisciplinary team of neurosurgery, radiation oncology, and anesthesiology.

The focus of the feasibility study was to determine the required image quality of the O-Arm cone beam computed tomography (CBCT; Medtronic Inc) images. The challenge was to ensure the best possible image quality while simultaneously using the INTRABEAM kV-IORT device (Carl Zeiss Meditec AG) and the neurosurgical MRI-navigation StealthStation (Medtronic, Inc). This requires a carbon head mount to avoid artifacts in the CBCT image. Figure 1 shows the compatibility set-up of the in-room CBCT and kV-IORT devices.

The neuronavigation stereotactic reference is attached to an additional frame (Fig. 2) on the operating table, distinct from the head support, due to an attachment incompatibility with the carbon head mount. This additional frame allows the patient, the kV-IORT device, and the navigation reference to fit within the CBCT bore range (Fig. 2). Of note, the additional frame and the navigation reference star include metal parts in their structure; nonetheless, these are far enough away not to interfere with the acquisition field. Figure 3 shows the operating room setup, which includes the CBCT, its mobile display station, the kV-IORT device, the neurosurgical navigation system, the operating table, and the anesthesia equipment. Additionally, the CBCT is placed in position before starting surgery, to avoid any shifting that could potentially endanger the patient. To ensure adequate imaging quality, all surgical metallic elements are removed from the scan field.

To determine the best possible image quality, CBCTs in different imaging modes were acquired with the "cheese phantom" (Gammex Inc; Fig. 4). The images obtained with it are used to generate the imaging value-to-density table/calibration curve (IVDT). Twelve inserts with density levels ranging from 0.300 g/cm³ to 1.842 g/cm³ are installed in the phantom, and HU (Hounsfield unit) values are assigned to each of them. Two inserts have cylindrical cavities with different diameters for assessing spatial resolution. The IVDT is created using image data from the Picture Archiving and Communication System.
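The IVDT is, in essence, a monotonic lookup from measured HU to physical density. A minimal sketch of how such a calibration could be built and queried from the phantom inserts; the HU readings below are hypothetical placeholders, not the measured values:

```python
import numpy as np

# Hypothetical mean HU per phantom insert vs the known insert density (g/cm^3).
hu_values = np.array([-700.0, -100.0, -30.0, 0.0, 40.0, 120.0, 300.0, 900.0])
densities = np.array([0.300, 0.900, 0.990, 1.000, 1.053, 1.150, 1.330, 1.842])

def hu_to_density(hu: float) -> float:
    """Piecewise-linear IVDT lookup; np.interp clamps outside the calibrated range."""
    return float(np.interp(hu, hu_values, densities))

for hu in (-150.0, 0.0, 250.0):
    print(f"{hu:+.0f} HU -> {hu_to_density(hu):.3f} g/cm^3")
```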
Implementation of the study in the clinical setting

IORT of brain metastases was performed as described in previous studies.2,3,10,11,12 Study patients are educated regarding the additional effective dose of intraoperative imaging as part of the study. Before IORT is performed, low-dose fluoroscopy imaging is acquired for positioning. This image ensures that the head is located centrally within the gantry. The table and CBCT positions are saved so that the surgeon can adjust the table as needed during surgery. The CBCT acquisition takes place after the brain metastasis has been resected and the spherical applicator has been placed in the cavity. At the surgeon's discretion, positioning corrections can be considered if the applicator is not in complete contact with the surgical bed or if the applicator diameter prevents such contact. If that is the case, the applicator is repositioned, and a second CBCT is performed after repositioning for confirmation.

Results

A wide dynamic range of Hounsfield unit values is necessary for good differentiation and detectability while maintaining low radiation exposure for the patient. Because the effective dose of the "HD3D large" imaging mode (120 kV, 20 mA, 150 mAs) is 0.51 mSv and because of its large dynamic range in HU values (+600 HU to −600 HU), this imaging mode was selected for the study.

To distinguish the spherical applicator from the surrounding tissue (brain density = 1.053 g/cm³), its values were obtained directly from the DICOM set and inserted into the IVDT of the HD3D imaging mode. The spherical applicator is made of polyetherimide (ULTEM, Polytron GmbH) and has a density of 1.27 to 1.51 g/cm³.13 With the value determined in the Picture Archiving and Communication System and the HD3D (large) CBCT, the applicator read 25 HU and the mean brain value was −2 HU, allowing a good differentiation profile between both structures (Fig. 5).

Figure 5 shows the resulting images of the "HD3D large" mode. Figure 5A shows the first positioning attempt, and Fig. 5B the correction after an air gap was identified between the applicator and the surgical cavity. Figure 5C depicts the difference between dose-distribution profiles with full contact and with a 2-mm air gap in between, as in this case. The cavity was irradiated with 30 Gy prescribed to the applicator surface.
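The dosimetric penalty of an air gap can be bounded from the inverse-square relation in Equation 1.0. A minimal sketch for the 3.5-cm applicator used here; note that this geometric estimate alone gives roughly 80% for a 2-mm gap, while the reported profile (Figure 5C), which also includes attenuation of the 50-kV beam, drops to about 75%:

```python
def relative_surface_dose(applicator_diameter_mm: float, gap_mm: float) -> float:
    """Inverse-square estimate of the dose reaching tissue across an air gap,
    relative to the dose at the applicator surface (Equation 1.0). Attenuation
    of the kV beam is neglected, so this is an upper bound on the delivered dose."""
    r0 = applicator_diameter_mm / 2.0  # prescription radius (source at the sphere center)
    r = r0 + gap_mm                    # actual distance to the tissue surface
    return (r0 / r) ** 2

for gap in (0.0, 1.0, 2.0):
    frac = relative_surface_dose(35.0, gap)
    print(f"gap {gap:.0f} mm: tissue surface receives ≈ {100 * frac:.0f}% of prescription")
```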
Discussion

This first in-human experience has evidenced that our proposed workflow is feasible and potentially meaningful for patients. This first attempt required repositioning the applicator toward the resection cavity to minimize the gap with the target tissue (Fig. 5). With additional experience, the entire procedure duration (including irradiation time) at our institution diminished from 45 minutes to approximately 30 minutes in subsequent patients. Of note, the total time required depends highly on the irradiation time, which in turn depends on the dose (usually 20-30 Gy) and the applicator diameter. The additional effective radiation dose for the patient is 0.51 mSv per CBCT; according to our procedure, a maximum of 2 CBCTs per patient are performed in case of correction. The area of interest of the CBCT also coincides with the irradiation area; therefore, the imaging dose can be considered negligible compared with the applied IORT dose (a ratio of approximately 0.05%). The imaging mode is sufficient for dimensional differentiation between brain and applicator because the applicator density is markedly greater than that of the surrounding soft tissue.

These outcomes provide new insights for IG-IORT. Nevertheless, certain challenges arise. Live in-room planning is currently not available, as the imaging quality does not allow accurate Monte Carlo calculations with the Radiance system (Radiance, GMV SA); therefore, only water-based dose estimations can currently be considered. Nevertheless, our team is working on a solution to enable assessing differences pre- and post-correction. An ongoing prospective study will help elucidate the actual clinical and dosimetric role of IG-IORT in the neurosurgical setting.

Conclusion

IG-IORT in neurosurgery proved feasible and practical in this first in-human experience, allowing a more precise positioning assessment before dose delivery. An ongoing prospective study has been initiated and will provide further dosimetric details regarding applicator repositioning.

Disclosures

Gustavo R. Sarria reports travel expenses and speaker bureau funding from Carl Zeiss Meditec AG, not related to this work, but no personal fees. Molina Grimmer reports travel expenses from Carl Zeiss Meditec AG, not related to this work. Hartmut Vatter reports travel expenses from Carl Zeiss Meditec AG, not related to this work.

Figure 1. Simultaneous coupling of the in-room cone beam computed tomography and kilovoltage intraoperative radiation therapy devices.

Figure 2. Setup of the operating room. On display are (A) the kilovoltage intraoperative radiation therapy device, (B) the cone beam computed tomography mobile display station, (C) the surgical microscope, and (D) the neuronavigation camera and screen.

Figure 3. Carbon head mount and neuronavigation stereotactic reference attached to an additional mount. Placement of the patient in the cone beam computed tomography bore.

Figure 5. (A) Cone beam computed tomography image after positioning the spherical applicator. A 3.5-cm diameter applicator was selected for this case. A 2-mm air gap can be observed between the applicator and brain tissue. (B) Cone beam computed tomography image after repositioning. (C) The blue line depicts a regular dose absorption pattern in case of perfect applicator-surface contact. The green line shows the dose-delivery pattern with a 2-mm air gap, delivering 75% of the prescription dose to the surface.
Integrin α1 subunit is up-regulated in colorectal cancer

Background: Colorectal cancer remains one of the leading causes of death from cancer in industrialized countries. Integrins are a family of heterodimeric glycoproteins involved in bidirectional cell signaling that participate in the regulation of cell shape, adhesion, migration, differentiation, gene transcription, survival, and proliferation. The α1 subunit is known to be involved in RAS/ERK proliferative pathway activation and plays an important role in mammary carcinoma cell proliferation and migration. In the small intestine, α1 is present in the crypt proliferative compartment and absent in the villus, but nothing is known about its expression in the colon mucosa or in colorectal cancer.

Results: In the present study, we demonstrated that in the colon mucosa, α1 is present in the basolateral domain of the proliferative cells of the crypt and in the surrounding myofibroblasts. We found higher levels of α1 mRNA in 86% of tumours compared to their corresponding matched margin tissues. Immunohistochemical analysis showed that α1 staining was moderate to high in 65% of tumour cells and 97% of the reactive cells surrounding the tumour cells, versus 23% of normal epithelial cells.

Conclusion: Our findings suggest an active role for the α1β1 integrin in colorectal cancer progression.

Background

Colorectal cancer (CRC) is a major public health concern in industrialized countries and remains one of the leading causes of death from cancer. Its development and progression are complex events involving many factors that lead to altered expression of genes and their products.

Integrins are a family of cell surface αβ heterodimeric transmembrane receptors for extracellular matrix components and cell-cell interactions. These receptors play a crucial role in mediating cell signaling in response to the extracellular environment by participating in the regulation of cell shape, adhesion, migration, differentiation, gene transcription, survival, and proliferation [1-3]. In this context, it is not surprising that integrins have been implicated in cancer progression. Indeed, over-expression of the αvβ3, α5β1, αvβ5, and α6β4 integrins in various cancer types, and their correlation with the metastatic behaviour of breast, prostate, and lung cancers as well as melanomas, are well documented [4]. Altered expression of integrins has also been reported in CRC [5,6]. For example, the integrin α9β1 was detected in 50% of tumours [7], expression of the pro-apoptotic α8β1 integrin was found to be down-regulated in CRC, and the pro-proliferative variant form of the integrin α6β4 was found exclusively in CRC cells [8,9]. Other integrins could also be involved in CRC. To date, 18 α subunits and 8 β subunits are known to form 24 different non-covalently linked heterodimers [10]. However, nothing is known about integrin α1β1 expression in CRC.

The integrin α1 subunit is predominantly present in stromal cells, smooth muscle cells, and fibroblasts, and is generally absent from normal epithelia, although it has been reported to be expressed in developing organs such as the kidney and skin [11]. In the human intestine, the α1 subunit is expressed in myofibroblasts and muscle cells as well as in a subregion of the epithelial lining, being restricted to the proliferative epithelial cell population located in the lower part of the glands [12,13].
Interestingly, this apparent correlation between α1β1 expression and the proliferative status of the cells appears to be consistent with a pro-proliferative role for α1β1 signaling involving the transmembrane caveolin-1, adaptor protein Shc and activation of the downstream RAS/ERK proliferative pathway [11,14]. These observations suggest that integrin α1β1 may be involved in CRC. In this study, as a first step to test this hypothesis, we investigated α1 integrin subunit expression in a set of colorectal adenocarcinoma specimens. Results and discussion In the digestive system, α1 integrin subunit expression in epithelia has only been reported in the small intestine and found to be confined to the lower crypts which contain the progenitor cells [12,13]. The integrin α1 subunit's β partner, β1, has been observed throughout the crypt-villus axis [12]. In the present study we confirmed using two distinct antibodies that, as seen in the small intestine, the α1 subunit was confined to crypt cells and was below the detection level in the differentiated epithelial cells of both the upper gland and surface epithelium of the normal colon (see * in Figure 1C, G and Figure 2A). In the lower half of the glands α1 expression was typically restricted to the basolateral domain of the epithelial cells and was also strongly expressed in the myofibroblasts [15] surrounding the crypts (Figure 1E, G, K and Figure 2B). Expression of α1 in crypt cells suggests a possible role in cell proliferation, as has been reported in mouse breast cancer [16]. As in normal epithelial cells, the α1 integrin subunit in cancers was localized at the basolateral domain of the tumour cells (Figure 1D, F, H). Indirect immunofluorescence on frozen tissue sections clearly confirmed the presence of the integrin α1 subunit in normal and tumour cells where an anti-laminin antibody, a specific basement membrane marker [6], was used to delineate the epithelial staining from the strong mesenchymal signal (Figure 2). In CRC, α1 was also localized at the basolateral domain of tumour epithelial cells (E) as well as in the adjacent subepithelial myofibroblasts (MF) (Panels C, D). Basement membrane (arrows) located at the interface between the two tissues was stained with an anti-laminin antibody (green staining). Nuclei were stained with DAPI (blue). Scale bar = 50 μm. Figure 1 Representative immunohistochemical images showing expression of the α1 integrin subunit in CRC (B, D, F, H, L) and corresponding matched resection margins (A, C, E, G, K). The α1 subunit was found to be expressed at higher levels in a significant number of CRC tumours (D, F, H) compared to their corresponding matched normal tissues (C, E, G) where it was found to be predominantly expressed in the proliferative cells of the crypt and below detection levels in normal surface epithelial cells (C, G). Note that the subepithelial myofibroblasts were also stained for α1 in both normal tissues (A, E, G) and tumours (B, D, F, H). Scores: The margin in A and the tumour in B were both scored 0 (negative) whereas the tumour in D was scored 2 (strong) compared to score 0 (weak) for the matched margin C. The tumour in F was scored 1 (moderate) as was its corresponding margin in E (moderate). The margin in G was scored 0 and the matched tumour in H was scored 2.
To validate the specificity of the primary goat anti-α1 antibody, adjacent sections of the same normal (I, K) and cancer (J, L) specimens were stained using 5 μg/ml of non-immune IgG (I, J) or anti-α1 IgG (K, L) as primary antibody. Scale bars = 50 μm. (D) As in Figure 1, α1 immunostaining was scored as 0: negative or weak, 1: moderate or 2: strong staining. Results show that α1 staining in carcinoma vs normal epithelial cells from the matched controls was significantly higher (McNemar-Bowker's test, p < 0.001) in 37 specimens (57%, gray area), similar in 25 specimens (38%) and lower in 3 specimens (5%). (E) The relative α1 integrin subunit expression was classified as negative/weak or moderate/strong. The results show that only 23% of the normal tissues displayed moderate/strong epithelial staining for α1 compared to 65% of cancer cells and 97% of the peri-tumoral stromal cells. Bars represent 95% confidence level. Analysis of the same set of matched samples at the transcript level revealed that the mRNA levels of the α1 integrin subunit were significantly increased (from 2 up to 30 times) in 86.2% of the 65 adenocarcinomas studied when compared to their matched resection margins (Figure 3A and B). Similar increases were seen in all four stages studied (Figure 3C). However, based on the observations described above, the observed increase in α1 mRNA levels included both the tumoral and peri-tumoral tissues. Indeed, by immunohistochemistry, when considering relative protein expression only in epithelial cells, 57.0% of the tumours displayed higher expression of the α1 subunit than their matched resection control tissues (Figure 3D). In fact, relative expression analysis showed that only 23% of the control specimens displayed moderate/strong expression in the epithelium compared to 65% of the tumour specimens and 97% of the peri-tumoral stromal tissue (Figure 3E). These results emphasize that integrin α1 subunit expression is increased in a significant proportion of both tumoral and peri-tumoral colonic tissues, a phenomenon that appears to account for the strong expression of this molecule at the transcript level. The fact that its expression is increased in 57% of the tumours relative to their matched resection margins makes the integrin α1 subunit a marker of interest in a context where the expression of other integrin subunits was found to be altered in comparable proportions, such as α9 [7] and β4 [9]. Functionally, integrin α1β1 has been reported to participate in cell invasion in the hepatocarcinoma model [17] and to regulate invasion by enhancing proteinase expression in a mouse mammary carcinoma cell line [16]. In vitro adhesion studies reported that integrin α1 blocking antibodies reduced peritoneal gastric cell invasion [18] while in the colorectal cancer HT-8/S11 cell line, clustering of α1, but not of α2 or β1, induced the recruitment of the FAK/Src signaling complex involved in cell invasion [19]. It has also been reported that angiogenesis was reduced in integrin α1-null mice [20]. Moreover, loss of the integrin α1 subunit has been found to decrease the incidence and growth of lung epithelial tumours initiated by oncogenic Kras [21], consistent with the fact that Ras is a downstream effector of the α1β1 integrin [14] and that oncogenic changes in the Kras gene alone are not sufficient to confer a malignant phenotype [22]. Kras mutation is well known to confer resistance to Cetuximab in colorectal cancer [23,24] but, to date, the link with α1 in CRC is not known.
On the other hand, it has recently been reported that cancer associated stromal cells have a pro-inflammatory gene signature [25] and promote cancer cell invasiveness [26,27]. Another study reported that fibroblasts could drive mammary carcinoma progression by modulating biochemical forces through β1 integrin signalling [28]. The data presented herein showing up-regulation of the integrin α1 subunit in the stromal compartment of colorectal tumours may also suggest a cooperative role of the integrin α1β1 in colon cancer progression. Conclusions In conclusion, the data presented in this study identified the expression and predominant localization of the α1 integrin subunit in the proliferative compartment of the normal colonic epithelium and demonstrated that α1 expression was significantly up-regulated in CRC in both tumour cells and surrounding stromal cells, suggesting a positive role for the α1β1 integrin in CRC progression. Patients, tumour tissues and tissue microarrays Primary colorectal adenocarcinomas and paired margin tissues were obtained from 65 patients undergoing surgical resection without prior neoadjuvant therapy. Tissues were obtained after each patient's written informed consent, according to a protocol approved by the Institutional Human Subject Review Board of the Centre Hospitalier Universitaire de Sherbrooke. Staging of the adenocarcinomas was according to the TNM classification of tumours. There were 8 stage 1, 23 stage 2, 26 stage 3 and 8 stage 4 specimens. For immunohistochemistry, samples were fixed with 4% paraformaldehyde in 0.1 M PBS at 4°C overnight, dehydrated in graded alcohols, and then embedded in paraffin. For cryosections, tissues were embedded as previously described [9,12]. Total RNA was extracted from tissues using the Totally RNA kit (Invitrogen, Burlington, ON) and processed according to the manufacturer's instructions [8,9]. Tissue microarrays (TMA) were performed as previously described [29]. Briefly, 5 μm thick serial sections were processed for routine hematoxylin and eosin staining, in order to mark the tissue region for TMA. Tissue cores with a diameter of 2 mm were removed from fixed paraffin-embedded tissue blocks using a 2 mm dermatological biopsy punch (Miltex Inc., York, PA) and arrayed in a paraffin mold which was first covered with double-sided adhesive to hold the cores in the correct position. Once all cores were deposited at the bottom of the mold, hot paraffin was poured to fill the mold and create a new block after incubation for one hour at 4°C. From the new block, sections of 5 μm in thickness were made. Each section was spread on a glass slide and stored at room temperature. Immunohistochemistry and expression analysis Sections (5 μm thick) cut from paraffin-embedded TMA were mounted on charged slides, deparaffinated in xylene and rehydrated in graded alcohol. Antigen retrieval was performed in 0.01M citrate buffer, pH 6, in a microwave pressure cooker for 30 minutes. Slides were cooled to room temperature before reacting with a peroxidase blocking reagent (0.3% H2O2) for 30 minutes, a streptavidin/biotin blocking reagent (Vector Laboratories Inc, Burlington, ON) for 15 min, and blocking serum [PBS 1×, 0.1% BSA (Sigma-Aldrich, Oakville, ON), 0.2% Triton X-100 (ICN Biochemicals, Aurora, OH), 0.1% donkey serum, 0.1% goat serum] for 30 minutes.
Sections were incubated overnight at 4°C with anti-human integrin α1 purified polyclonal sheep IgG (5 μg/ml, AF5676, R & D Systems, Minneapolis, MN) or with equal amounts of sheep non-immune IgG (sc-2717, Santa Cruz Biotechnology, Santa Cruz, CA) as negative control, followed by incubation with an anti-sheep biotinylated secondary antibody (Vector Laboratories) for one hour at room temperature. Then, tissues were incubated with a streptavidin HRP conjugated solution (1:1000, Millipore, Billerica, MA) for one hour and the colour developed with 3,3′-diaminobenzidine (Vector Laboratories) in a buffered substrate solution. Slides were counterstained with light hematoxylin, dehydrated and cover-slipped. Representative images were acquired using a Leica DM-RXA microscope. Protein expression in different cell types, including epithelial tumour cells, epithelial normal cells and reactive cells, was separated into 2 groups based on staining intensity: negative/low or moderate/strong expression. Expression in tumour cells was compared to the normal epithelial cells of the respective margin and scored as 0: no or weak staining, 1: moderate, and 2: strong staining. Indirect immunofluorescence To determine the α1 expression pattern, 3 μm thick sections were cut from different normal colonic mucosa and adenocarcinoma samples. First, sections were fixed for 10 minutes in ethanol at -20°C and then washed 3 times with chilled PBS. Then, nonspecific protein-protein interactions were blocked for 30 minutes with 10% blotto followed by a 2-hour incubation with the primary α1 mouse monoclonal antibody TS2/7 (Endogen, Woburn, MA) diluted 1:10, and an anti-laminin rabbit antibody (Serotec, Raleigh, NC) diluted 1:1000 in 10% blotto. After three washes with ice-cold PBS, slides were incubated for one hour at room temperature with AlexaFluor 488 and AlexaFluor 594 conjugated secondary antibodies directed against mouse and rabbit IgG (Molecular Probes, Burlington, ON). Slides were then stained with DAPI (4′,6-diamidino-2-phenylindole, 2%) and then mounted in glycerol:PBS (9:1) containing 0.1% paraphenylenediamine and observed with a Leica DM-RXA microscope. Images were acquired and composites generated with the MetaMorph Imaging System (Universal Imaging, West Chester, PA). Statistical analysis The One-Sample Student's t-test was used to determine the statistical significance of mRNA expression analyses. For immunohistological analyses, the McNemar-Bowker test was used to compare scores between tumours and nonmalignant samples (a minimal computational sketch of this test is given below). Competing interests The authors declare that they have no competing interests. Authors' contributions SB carried out the experiments, participated in the analysis and interpretation of the data and has been involved in the drafting of the manuscript. JCC participated in the acquisition of the data and in the design of the study. JFB conceived of the study, participated in the interpretation of the data and in the preparation of the manuscript. All authors read and approved the final manuscript.
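For readers unfamiliar with the McNemar-Bowker symmetry test referenced above, the following is a minimal sketch, not from the original study, of how paired 0/1/2 immunostaining scores could be compared. The 3×3 contingency table is hypothetical (chosen only to match the reported 37 higher / 25 similar / 3 lower counts), and the implementation relies on statsmodels' SquareTable.symmetry, which performs Bowker's test of symmetry, the k×k generalization of McNemar's test:

```python
import numpy as np
from statsmodels.stats.contingency_tables import SquareTable

# Hypothetical paired-score table: rows = margin score (0/1/2),
# columns = matched tumour score (0/1/2). Cell [i][j] counts patients
# whose margin scored i and whose tumour scored j (n = 65 in total).
paired_scores = np.array([
    [18, 20, 10],   # margin 0 -> tumour scored 0/1/2
    [ 2,  5,  7],   # margin 1 -> tumour scored 0/1/2
    [ 1,  0,  2],   # margin 2 -> tumour scored 0/1/2
])

# Bowker's test of symmetry: H0 says discordant pairs are balanced,
# i.e. tumours are not systematically scored higher than their margins.
result = SquareTable(paired_scores).symmetry(method="bowker")
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4g}")
```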
3,617.6
2013-03-07T00:00:00.000
[ "Biology", "Medicine" ]
A High Precision Pipeline for Financial Knowledge Graph Construction Motivated by applications such as question answering, fact checking, and data integration, there is significant interest in constructing knowledge graphs by extracting information from unstructured information sources, particularly text documents. Knowledge graphs have emerged as a standard for structured knowledge representation, whereby entities and their inter-relations are represented and conveniently stored as (subject, predicate, object) triples in a graph that can be used to power various downstream applications. The proliferation of financial news sources reporting on companies, markets, currencies, and stocks presents an opportunity for extracting valuable knowledge about this crucial domain. In this paper, we focus on constructing a knowledge graph automatically by information extraction from a large corpus of financial news articles. For that purpose, we develop a high precision knowledge extraction pipeline tailored for the financial domain. This pipeline combines multiple information extraction techniques with a financial dictionary that we built, all working together to produce over 342,000 compact extractions from over 288,000 financial news articles, with a precision of 78% at the top-100 extractions. The extracted triples are stored in a knowledge graph, making them readily available for use in downstream applications. Introduction Knowledge graphs (KG) have lately emerged as a de facto standard for knowledge representation in the Semantic Web, whereby knowledge is expressed as a collection of "facts", represented in the form of (subject, predicate, object) (SPO) triples, where subject and object are entities and predicate is a relation between those entities. This collection can be conveniently stored, queried, and maintained as a graph, with the entities modeled as vertices and relations as links or directed edges. Driven by applications such as question answering, fact checking, information search, data integration and recommender systems, there is tremendous interest in extracting high quality knowledge graphs by tapping various data sources (Ji et al., 2020; Noy et al., 2019; Dong et al., 2014; Shortliffe, 2012; Lehmann et al., 2015). Over the years, a number of cross domain KGs have been created including DBpedia (Lehmann et al., 2015), YAGO (Suchanek et al., 2007), Freebase (Bollacker et al., 2008) and NELL (Mitchell et al., 2018), covering millions of real world entities and thousands of relations, across different domains. Recently, there has been a growing interest in generating domain targeted structured representations of financial and business entities and how they are related to each other. Crunchbase curated a knowledge base (KB) through partnerships with companies and data experts covering 100,000+ business entities including companies and investors, but covering only a few types of business transactions (i.e., relations) such as acquisitions and funding rounds. The work by (Benetka et al., 2017) attempted to address this limitation by developing a pipeline to populate a KB semi-automatically with quintuples of the form (subject, predicate, object, monetary value, date) extracted from a news corpus. However, this pipeline only extracted 496 quintuples covering 316 economic events that fall into one of two categories: events that increment the value of an agent's resources (e.g., acquire, collect) or decrement it (e.g., pay, sell).
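To make the target representation concrete, here is a minimal sketch, not from the paper, of how SPO triples can be stored and queried as a directed graph; the triples themselves are hypothetical examples in the spirit of the financial facts discussed below:

```python
import networkx as nx

# Store SPO triples as edges of a directed multigraph:
# entities become vertices, predicates become edge labels.
triples = [
    ("Amazon", "acquired", "Whole Foods"),
    ("Amazon", "headquartered_in", "Seattle"),
    ("Whole Foods", "isA", "grocery chain"),
]

kg = nx.MultiDiGraph()
for subj, pred, obj in triples:
    kg.add_edge(subj, obj, predicate=pred)

# Query: all objects related to "Amazon" via the "acquired" predicate.
acquired = [obj for _, obj, data in kg.out_edges("Amazon", data=True)
            if data["predicate"] == "acquired"]
print(acquired)  # ['Whole Foods']
```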
Our goal is to automatically extract high precision structured representations from thousands of financial news articles covering a broader range of financial entities such as markets, stocks, persons (e.g., CEOs, presidents, etc), currencies, and governments. Further, storing them in a KG can facilitate answering interesting and complex queries such as (1) company acquisitions by German drugmakers, (2) US-China trade in terms of exports, (3) companies suing each other on patent grounds, etc. Knowledge extraction from unstructured sources is one of the major challenges facing industry scale KGs (Noy et al., 2019). Traditional approaches to KG construction from text rely on a pre-specified ontology of relations and large amounts of human annotated training data to learn extraction models for each relation. This limits their scalability and applicability to new relation types. Open Information Extraction (OpenIE) (Banko et al., 2007) aims to overcome these limitations by extracting all semantic relational tuples in raw surface form with little or no human supervision. Closely related to OpenIE is Semantic Role Labeling (SRL) which aims at detecting argument structures associated with verb predicates, as well as labeling their semantic roles, thus overcoming situations where the verb tense and conjugation change the role of the argument in the sentence (i.e., whether the argument is an agent which carries out the predicate action or a theme which receives the predicate action). Semantic roles make it possible to impose structural and semantic constraints on entity types to ensure high quality knowledge extraction. Besides, knowing the semantic roles of arguments can improve the effectiveness of question answering. In this work, we develop a high precision knowledge extraction pipeline tailored to the financial news domain by combining SRL information extraction for verb predicates with typed patterns for noun mediated relations. This pipeline filters noisy predicate-argument structures via a dictionary of semantically and structurally constrained sense-disambiguated financial predicates. In order to maximize the utility of the extractions for downstream tasks, our pipeline produces compact extractions via dictionary-guided minimization of overly-specific arguments. These extractions are scored using a binary classifier, with the score reflecting our confidence in the extracted fact. We perform a lossless decomposition of the n-ary relations extracted, to construct the KG. While some components of the pipeline are customized to the financial domain, we believe with small tweaks it can be easily adapted to other domains. Compared with (Benetka et al., 2017), the most closely related work, our pipeline extracts over 342,000 n-ary facts and covers more types of financial predicates - a total of 87 as opposed to 50 in (Benetka et al., 2017). Furthermore, our pipeline produces high precision extractions, specifically 78% at the top-100 extractions, as opposed to 34% for the pipeline of (Benetka et al., 2017). In summary, our main contributions are as follows. We design a high precision knowledge extraction pipeline tailored to the financial news domain. Our pipeline combines SRL and pattern based information extraction to extract domain targeted noun/verb-mediated relations. We develop a Conditional Random Field (CRF) model that identifies and removes sequences of noisy text commonly found in financial news articles.
To further improve precision, we build a dictionary of semantically and structurally constrained sense-disambiguated financial predicates to filter out noisy extractions produced by SRL. The ∼380,000 triples we extracted are stored in a KG which can be readily queried. We also conduct ablation studies to examine the effect of the different components of the pipeline on a number of performance metrics. Related Work Cross-domain KGs such as DBpedia, Freebase, NELL, and BabelNet (Navigli and Ponzetto, 2012) contain encyclopedic knowledge covering real world entities across different domains (e.g., people, organizations, and geography). They were either manually curated (e.g., Freebase) or automatically created, from semi-structured textual sources such as Wikipedia infoboxes (e.g., DBpedia), or unstructured text on the web (e.g., NELL). A number of efforts to create domain targeted KGs followed. The Aristo Tuple KB (Mishra et al., 2017) extracted 294,000 high-precision SPO triples using a KG extraction pipeline targeted towards elementary science topics from domain relevant sentences found on the web. In (Wang et al., 2018), the authors developed a framework (CPIE) which extracted relational tuples between 3 fixed types of biomedical entities from PubMed paper abstracts. In the financial domain, Crunchbase curated a KB covering over 100,000 companies, investors, acquisitions and funding rounds, while (Benetka et al., 2017) extracted quintuples of monetary transactions covering 316 economic events. NELL used a predefined ontology of categories and relations and a few seed examples that are used for semi-supervised bootstrap learning of semantic categories of entities and the relations that exist between them. This bootstrapping approach reduces the human labor required to label training data while taking advantage of the huge amount of unlabeled data available. While manually defining an ontology leads to high precision extractions, it requires domain experts and, in an open domain, it is not feasible to define a complete ontology. OpenIE aims to overcome these limitations by extracting all relational phrases in raw surface form in a single pass over the corpus. However, the verb tense and conjugation can change the role of an argument in the relation. SRL helps disambiguate the relations between arguments and their predicates by identifying semantic frames within the sentence and the semantic roles of the arguments. The KB built in (Benetka et al., 2017) used a pipeline that consisted of: (1) a grammar for monetary value recognition, (2) SRL for economic event identification, (3) entity recognition via DBpedia, Crunchbase and Freebase, and (4) date extraction via a temporal tagger. It extracted structured representations of economic events in the form of (subject, predicate, object, monetary value, date) quintuples from the New York Times Annotated Corpus (NYTC). It ranked all representations of an economic event according to confidence scores learnt using a supervised model. The domain was defined by a list of financial predicates using a semi-supervised method that starts with a set of seed predicates and expands them using WordNet (Miller, 1995). The pipeline only extracts 496 quintuples from 316 economic events over just 2 categories of events, with a precision of 34%. Our work has the following key differences with prior work. We deal with the challenging task of identifying and removing noisy text spans in news articles.
Our pipeline combines SRL and pattern based IE in addition to producing implicit extractions from appositions. We build a dictionary of 87 semantically and structurally constrained financial predicates covering broader financial transactions and improving precision. We resolve coordinating conjunctions and do a dictionary-guided minimization to prune overly specific arguments. In all, our approach leads to a large knowledge graph with high precision. The Knowledge Extraction Pipeline In the following subsections, we describe each component of our KE pipeline (see Fig. 1). The pipeline operates at the sentence level and starts with cleaning the news articles by identifying and removing noisy text spans, then performs linguistic annotations, i.e., resolves co-references and identifies named entities. Predicate argument structures are then extracted by the SRL component and passed to the financial predicate dictionary which we built to filter out noisy extractions by the SRL. We produce additional extractions via high-precision typed patterns that are tailored to the financial domain, and by resolving appositions. We maximize the utility of the extractions by minimizing overly specific arguments by processing coordinating conjunctions and financial lexicon guided minimization. Finally, we score the predicate argument structures to reflect our confidence in their precision and conciseness. The input to the pipeline consists of two components: (i) Financial news corpus: the US Financial News dataset containing ∼306k news articles collected from Bloomberg.com, CNBC.com, reuters.com, and wsj.com between January and May 2018; (ii) Financial Times Lexicon: this lexicon includes thousands of financial words and phrases selected by Financial Times editors (e.g., capital ratio, corporate bond, and free market). We use this lexicon to identify and minimize overly specific arguments as described in the minimization stage. Text Pre-processing & Cleaning (CL). We start with standard NLP cleaning by removing brackets, parentheses, quotes and other punctuation marks. A more challenging and important cleaning task that is unique to our problem is to identify and remove noisy text spans present in the article, such as the publication date, the time the article was last updated, reporters' names, and/or reading time. This information is usually embedded within the article lead and is not separated from the content by any particular separating tokens (see Fig. 2 for an example). The variety of ways in which noise of this kind can manifest in text limits the feasibility of using regular expressions to capture and eliminate such noisy text spans. Additionally, this type of noise can appear anywhere in the article and is not limited to its lead. We cast the task of identifying noisy text spans as a sequence labeling problem where tokens in a sentence are assigned a sequence of labels. We manually annotated a dataset of 779 sentences containing 37% noise and trained a Conditional Random Field (CRF) (Lafferty et al., 2001) to label sequences of tokens using token features that combine information from the surrounding tokens and their part-of-speech tags. Specifically, for each token at position t, we extract unigrams and POS tags between positions t − 2 and t + 2 and use combinations of these features to describe the token.
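A minimal sketch of such a CRF tagger is given below, using the sklearn-crfsuite package (our choice for illustration; the paper does not name its CRF implementation), with window features resembling those just described. The KEEP/NOISE label set, the toy sentence, and the exact feature templates are assumptions:

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, t):
    """Window features for token t: unigrams and POS tags in [t-2, t+2]."""
    feats = {"bias": 1.0}
    for off in range(-2, 3):
        i = t + off
        if 0 <= i < len(sent):
            word, pos = sent[i]
            feats[f"word[{off}]"] = word.lower()
            feats[f"pos[{off}]"] = pos
            feats[f"word+pos[{off}]"] = f"{word.lower()}|{pos}"  # combined feature
    return feats

# Toy training data: (word, POS) pairs labeled KEEP (content) or NOISE
# (bylines, timestamps, etc.). The real model was trained on 779 sentences.
sent = [("Published", "VBN"), ("May", "NNP"), ("3", "CD"), (",", ","),
        ("2018", "CD"), ("Amazon", "NNP"), ("acquired", "VBD"), ("PillPack", "NNP")]
labels = ["NOISE", "NOISE", "NOISE", "NOISE", "NOISE", "KEEP", "KEEP", "KEEP"]

X = [[token_features(sent, t) for t in range(len(sent))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])  # per-token KEEP/NOISE predictions
```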
Once the text is cleaned, we then resolve references to the same entity using the co-reference resolution system (COREF) integrated in the SpaCy NLP library (Honnibal and Montani, 2017). We identify named entities using the AllenNLP named entity recognition (NER) system (Gardner et al., 2017). Semantic Role Labeling (SRL). We extract semantic relationships between entities using Semantic Role Labeling. As described in Section 1, information about the predicate sense and semantic roles of arguments in a relation are not captured by traditional OpenIE extractors, making them less useful in domain specific IE. Consider the sentence: "Whole Foods was acquired by Amazon in 2017 for $13.7 Billion". OpenIE extracts (Whole Foods; was acquired; by Amazon; in 2017; for $13.7 Billion). While the OpenIE extraction is accurate, it is not useful for answering queries since it is unclear which entity acquired the other entity, or which argument is the price or the date. SRL, on the other hand, extracts acquire.01(agent: Amazon, thing acquired: Whole Foods, price paid: $13.7 Billion, temporal argument: 2017). SRL not only identifies the correct sense of the predicate as acquire.01 but also identifies the role of each argument. Correctly identifying the sense of the predicate and the thematic roles of its arguments helps us impose structural and semantic restrictions to improve the precision. Concretely, the predicate acquire.01 (01 is the predicate sense, meaning get) must have at least two arguments, one with the role entity acquiring something and the other with the role thing acquired. Further, in the financial domain, we would like to enforce that both arguments have type ORG, i.e., an organization. An optional argument is the price paid, whose type should be MONEY. We use LUND-SRL (Johansson and Nugues, 2008) for extracting and labeling predicate-argument structures. The output from LUND-SRL includes the lemmas, POS tags, and the dependency relations among all tokens in the sentence. Financial Predicate Dictionary Filtering (FPDF). We filter out domain irrelevant predicate-argument structures using a dictionary of financial predicates. This dictionary lists the sense-disambiguated predicates along with structural constraints, i.e., required vs optional arguments, and semantic constraints, i.e., the possible entity types (e.g., ORG, MONEY). We construct the dictionary by automatically extracting sense-disambiguated predicates from the corpus and manually selecting the highest frequency ones that are relevant to the financial domain. We expand this set using the FrameNet (Baker et al., 1998) lexical resource. This yields 87 financial predicates. For each of these sense-disambiguated predicates, we determine the required arguments and potential entity types using PropBank semantic role annotations (Kingsbury and Palmer, 2002). Fig. 3 shows the entry for acquire.01 in the dictionary, where "r" marks a required argument and "o" an optional one. It is important to note that this dictionary is different from the financial lexicon we described earlier as an input to the pipeline. As will be described below, the lexicon will guide the minimization of arguments that are considered overly specific, whereas this dictionary filters out predicate argument structures that are either not financially relevant or are not in compliance with the semantic and structural constraints (a small sketch of this filtering step is given below). Many of the SRL extracted relations contain temporal arguments AM-TMP such as "today", "last year", or "3 months ago".
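As an illustration of how such dictionary entries can drive filtering, here is a minimal sketch; this is our own reconstruction, not the authors' code, and the entry layout and the passes_fpdf function are assumptions based on the acquire.01 example and the constraints described above:

```python
# Hypothetical dictionary entry modeled on the acquire.01 example:
# each semantic role is marked required ("r") or optional ("o") and
# restricted to a set of allowed entity types.
FINANCIAL_PREDICATES = {
    "acquire.01": {
        "entity acquiring something": ("r", {"ORG"}),
        "thing acquired":             ("r", {"ORG"}),
        "price paid":                 ("o", {"MONEY"}),
        "AM-TMP":                     ("o", {"DATE"}),
    },
}

def passes_fpdf(predicate, arguments):
    """arguments: dict mapping role -> entity type, e.g. {'thing acquired': 'ORG'}."""
    entry = FINANCIAL_PREDICATES.get(predicate)
    if entry is None:
        return False  # not a financial predicate: discard
    for role, (flag, allowed_types) in entry.items():
        if role not in arguments:
            if flag == "r":
                return False  # structural constraint: required argument missing
        elif arguments[role] not in allowed_types:
            return False      # semantic constraint: entity type not allowed
    return True

print(passes_fpdf("acquire.01", {"entity acquiring something": "ORG",
                                 "thing acquired": "ORG",
                                 "price paid": "MONEY"}))   # True
print(passes_fpdf("sink.01", {"thing sinking": "ORG"}))      # False (not in dictionary)
```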
We pass these temporal arguments to a date parser library that parses localized dates into a standard date format relative to the publication date of the article. We filter out predicate-argument structures that contain modal arguments (e.g., Google could have acquired Facebook) or negated arguments (e.g., Google did not acquire Facebook) since these structures are unlikely to represent facts. Furthermore, we only include structures where the predicate is in the past tense (e.g., Google acquired Youtube). We also filter out predicate-argument structures with adverbial arguments (AM-ADV) representing adverbs of negation such as "hardly", "never", "almost" since they do not represent positive facts (e.g., Yahoo almost acquired Facebook). Appositions (APPOS). We produce implicit extractions from appositions. Consider the sentence: "Dubai-based port operator DP World announced plans to transport cargo". The relation isA(DP World, Dubai-based port operator) is extracted by following the APPO dependency relation between "operator" and "World". Coordinating Conjunctions (CC). We process coordinating conjunctions (CC) which join similar syntactic units (i.e., conjoints) into larger groups by means of CCs such as and/or. Consider the sentence: "HFF, HFF Securities L.P. and HFF Securities Limited are owned by HFF Inc.". The argument "HFF, HFF Securities L.P. and HFF Securities Limited" has three conjoints and is considered overly specific. Extracting conjoints produces three relations with simpler arguments instead of one long argument. We find the CC using the dependency relations coord and conj. Argument Validation (AV). Not only does the financial dictionary filtering step help ensure that the extractions are financially relevant, but its semantic and structural constraints also help eliminate predicate-argument structures that are incorrectly labeled by the SRL system. E.g., SRL correctly identifies the labels of (Amazon) and (Whole Foods) in the sentence "Whole Foods was acquired by Amazon in 2017 for $13.7 Billion". Both of the arguments satisfy the entity type constraint ORG. Similarly, the arguments ($13.7 billion) and (2017) are correctly identified and both have the correct types, i.e., MONEY and DATE, respectively. Consequently, all arguments pass the argument validation step. By contrast, consider the sentence "Israeli-Palestinian relations sank to a low...". The SRL extracts sink.01(thing sinking: Israeli-Palestinian relations, end point, destination: to a low). However, in the financial predicate dictionary, the end point, destination of the predicate sink.01 must be of one of the types (MONEY, QUANTITY, CARDINAL or PERCENT). As a result, this extraction is rightfully filtered out by this step. Pattern Extraction (PTRN). In addition to the verb mediated relations extracted via SRL, we extract noun mediated relations via a pattern based extractor. The patterns are similar to those in the part-of-speech and noun chunks extractor (Pal and others, 2016), except that we add entity type constraints to the patterns, i.e., ORG and PER. This yields high precision extractions through patterns that are commonly found in financial news. Furthermore, it facilitates segmenting compound relational nouns that are not preceded by a demonym (i.e., a word derived from the name of a place to identify its residents or natives, e.g., Canadian, North American, etc.). For the patterns that do contain demonyms, we use the demonym-location table from (Pal and others, 2016). Overall, we extract 11 pattern types.
Due to space limitations, we show the three most common pattern types in the corpus in Table 1. We create a string that replaces tokens with POS tags, entity types or demonyms, then check if it matches one of the pattern types. Argument Minimization (MIN). In addition to processing coordinating conjunctions, we minimize arguments even further by identifying and dropping additional tokens that are considered overly specific. For this, we drop tokens that are considered safe to drop such as determiners, possessives, and adjectives modifying named entity PER (except demonyms). We then perform a dictionary-guided minimization of the noun phrase pattern [adverbial|adjective]+ Noun+, similar to the dictionary mode of (Gashteovski et al., 2017), except we use the Financial Times lexicon in place of the dictionary of frequent subjects, relations and arguments found in their corpus. This ensures that we do not drop tokens that are meaningful and important in the financial context. E.g., consider the sentence "Mikros Systems Corporation, an advanced technology company, announced...". We extract isA(Mikros Systems Corporation, an advanced technology company). We then drop the determiner "an", and proceed with enumerating sub-sequences of the noun phrase pattern instance advanced technology company and query its sub-sequences against the financial lexicon. Since "advanced company" is not found in the lexicon, we drop advanced from the argument. Fact Scoring (SCORE). We score the predicate argument structures to reflect our confidence, by training a binary logistic regression classifier using over 1400 SRL extractions which we manually labeled. Extractions are considered valid if they are both precise and concise, i.e., explain only one proposition. We identified a collection of features that are powerful predictors of validity. The features include the presence of a coordinating conjunction, apposition, or verb, unresolved temporal arguments, pronouns, determiners, bad characters, and the predicate and named entities in the argument. We classify each valid argument of the extracted fact and take the minimum over all argument scores as the overall confidence score of the fact. Using the minimum aggregate function promotes the most precise extractions. Evaluation We run our KE pipeline on the US Financial News corpus. Fig. 4(a) shows the distribution of news article length in the corpus, measured by the number of sentences. For ease of exposition, the figure is trimmed by eliminating the distribution's long tail. Observe that most of the articles have fewer than 20 sentences and articles with more than 60 sentences in length are very rare. Fig. 4(b) shows the distribution of named entity types in the corpus. As expected in a financial news corpus, the top 4 entity types, ORG, CARDINAL, MONEY, and PERSON, account for almost 80% of the unique named entities in the corpus. We report the results and compare with the work in (Benetka et al., 2017) which extracts (subject, predicate, object, monetary value, date) quintuples from the New York Times Annotated Corpus (NYTC). Further, we compare the functionalities of our pipeline against (Benetka et al., 2017) in Table 3. Finally, we illustrate a small subgraph that answers a query posed to the large extracted KG. Extraction statistics. To demonstrate the effectiveness of the pipeline, we report in Table 2 a number of extraction statistics resulting from the processing of 288,118 articles.
A total of 342,181 tuples were extracted by the SRL, pattern and apposition modules from 201,731 sentences, constituting 5.2% of all sentences in the corpus, with an average arity of 2.27. On the other hand, the pipeline in (Benetka et al., 2017) extracted only 496 quintuples from 2.1M sentences (of which only 18.2% describe economic events) from 1.8M articles. We found that 94.7% of the predicate-argument structures that were eliminated did not pass the financial predicate filtering step. This indicates that the financial dictionary filtering likely had the greatest impact on the total number of facts extracted by the pipeline. It also suggests that the vast majority of the sentences in the corpus do not contain financially relevant facts. Another 3.69% of the relations did not pass the syntactic requirements whereas 1.52% did not pass the argument validation step. This suggests that the semantic and structural constraints on the predicates do not play a major role in filtering candidate predicate argument structures, hence relaxing these constraints would not substantially increase the number of extractions. More than half of the facts were implicitly extracted via appositions. The SRL module extracted over 161,000 predicate argument structures contributing 47.83% of the facts, whereas ∼1.3% were noun mediated relations extracted via patterns. It is important to note that of the ∼11,000 pattern extracted facts, only 4454 are distinct. The minimization module responsible for condensing overly specific arguments dropped over 427,000 tokens. The majority of the tokens dropped this way were due to safe minimizations, i.e., determiners or possessives. The rest of the tokens were dropped by the dictionary-guided minimization. The top bigram lexicon hits were quarterly dividend (queried 372 times), followed by common stock and net income, and subsequently the tokens of these bigrams were marked as stable. The adjectives in these bigrams, i.e., quarterly, net, and common, are critical in the financial context. Thus, simply dropping adjectives would result in the loss of important information, and this is avoided by querying against the financial lexicon. This emphasizes the importance of the financial lexicon in preserving tokens that are important in the financial context while minimizing overly specific arguments. Precision. We ranked the extractions according to their confidence scores, examined the top 150 extractions, and manually labeled them. The ratio of correct extractions in the top 50, top 100, and top 150 extractions, i.e., Precision@50, Precision@100, and Precision@150, is 78%, 78%, and 79.33%, respectively. The pipeline in (Benetka et al., 2017) has a much lower precision of 34% (at a recall of 20%). KG Statistics. To build the KG, we break down each n-ary relation into binary relations (triples) identified by predicate.sense-id (e.g., announce.01-0.8926653297234465). Fig. 5(a) shows the relation acquire.01(agent: Merck, thing acquired: Cubist Pharmaceuticals Inc., AM-TMP: 04/04/2015) decomposed into 3 relations: acquired_by_0(thing acquired: Cubist Pharmaceuticals Inc., agent: Merck), acquired_in_0(thing acquired: Cubist Pharmaceuticals Inc., AM-TMP: 04/04/2015), and acquired_in_0(agent: Merck, AM-TMP: 04/04/2015), where the suffix 0 is the relation id. The ID ensures lossless decomposition of an n-ary relation and helps to identify the different arguments.
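A minimal sketch of this lossless n-ary-to-binary decomposition follows; it is our own reconstruction, and the predicate-naming scheme below only approximates the human-readable names such as acquired_by_0 shown in the paper:

```python
from itertools import combinations

def decompose(predicate, arguments, rel_id):
    """Break an n-ary predicate-argument structure into binary triples.

    arguments: dict mapping role -> value. rel_id ties the resulting
    triples together so the n-ary fact can be reassembled losslessly.
    """
    triples = []
    for (role_a, val_a), (role_b, val_b) in combinations(arguments.items(), 2):
        rel = f"{predicate}_{rel_id}"   # e.g. "acquire.01_0"; the paper renders
        triples.append((val_a, rel, val_b))  # nicer names like "acquired_by_0"
    return triples

fact = {"agent": "Merck",
        "thing acquired": "Cubist Pharmaceuticals Inc.",
        "AM-TMP": "04/04/2015"}
for triple in decompose("acquire.01", fact, rel_id=0):
    print(triple)
# Three binary triples, one per argument pair, all sharing relation id 0.
```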
The resulting KG has 248,923 nodes, 474,837 edges (which becomes 380,079 edges after eliminating redundancies), and 31,144 weakly connected components (WCC), i.e., subgraphs where each pair of nodes is connected by a path, ignoring edge directions. The diameter, i.e., the longest distance between a pair of vertices, of the largest WCC is 19. The KG has an average degree of 1.5269. Predicate Distribution. Fig. 5(b) shows the distribution of the top 10 financial predicates. The predicate announce.01, which describes the semantic frame "Statement", makes up 17% of the total SRL facts. Out of the top 20 predicates extracted, four: increase.01, rise.01, fall.01, and decrease.01, describe the semantic frame "Change position on scale", which is usually associated with stock reporting. This semantic frame, among others including issue.01, launch.01 and name.01, was not captured in the ontology of (Benetka et al., 2017), which was limited to just two types of economic events. Pattern Distribution. The most common pattern, i.e., the pattern that extracted the most facts, is Demonym-ORG CRN (an instance of this pattern is the fact Healthcare group(Sanofi, France), which was extracted 23 times), followed by Demonym-PER CRN (e.g., President(Donald Trump, United-States), extracted 1026 times), and ORG-PER CRN (e.g., Secretary(Steven Mnuchin, Treasury), extracted 81 times). These three pattern types account for over 90% of the pattern extracted facts. Fig. 5(c) shows the top 10 relational nouns extracted via patterns. Illustrating the KG. The entire knowledge graph is huge, thus for the purpose of illustration, we show a small subgraph of the KG that we extracted in Fig. 5(a). Consider the query "Which drugmakers were acquired by a German drugmaker?" (Query (1), Section 1) posed on the entire KG. The subgraph in Fig. 5(a) presents a subset of the answers to this query in the form of a graph. It says that Merck acquired both Cubist Pharmaceuticals and Medco, along with details about the date of acquisition and the amount paid, where available. Ablation Studies We conduct ablation studies on a random sample of 1000 articles to gauge the effect of each stage of the pipeline on the overall performance. The results are reported in Table 4. Bolded cells in each column capture the most significant impact of turning off the corresponding module. Turning off the cleaning module (column CL) results in including noisy text spans and the overall number of extractions drops (48.5% drop). More importantly, the precision in the top 50 and top 100 extractions drops significantly, by 12% and 4% respectively. Turning off the co-reference stage (column COREF) results in shorter sentences passing the sentence length filtering stage, yielding more sentences. The precision@50 increases by 4%. However, the total number of facts drops by 4.2% as significantly fewer SRL facts are extracted (since references are not resolved). Turning off the financial predicate filtering stage (FPDF) results in an 18.2% increase in the total number of facts. However, that comes at the price of a 68% loss in precision@50. Turning off the coordinating conjunctions stage (CC) results in a 2% increase in precision@50 for the price of a 16.5% drop in the total number of facts. It also results in a 4.2% increase in the average argument length. This demonstrates the effectiveness of CC in minimizing overly specific arguments. Turning off appositions extraction (APPOS) yields a 2% (resp. 4%) increase in precision@50 (resp.
precision@100), at the expense of a 53% drop in the overall number of extracted facts. Turning off the minimization module (MIN) results in a 13.1% increase in the average argument length. This indicates the significance of this module in minimizing overly specific arguments while preserving financially relevant parts owing to the financial lexicon guided minimization. Compared to the full pipeline (NONE), turning off any single module does not prune away the financial facts, except turning off FPDF, which shrinks the financial facts to a mere 14.1%, attesting to the crucial role this module plays. Discussion and Future Work Our pipeline was effective in extracting predicate-argument structures from sentences with long range dependencies. E.g., from the sentence "Orb Energy, an Indian solar company backed by U.S. venture capital fund Acumen Fund Inc, secured $10 million in OPIC financing last year for commercial rooftop projects.", our pipeline extracted the fact secure.01(entity acquiring something: 'Orb Energy', thing acquired: '$10 million', source: 'rooftop projects', AM-TMP: '02/14/2017'). The overly specific argument commercial rooftop projects is minimized by dropping the token commercial, as none of its sub-sequences is found in the financial lexicon. Furthermore, the pipeline successfully identifies and classifies the roles of different arguments to a fine granularity. E.g., from the sentence "Parke Bancorp's net loans increased to $1.01 billion at December 31, 2017, from $852.0 million at December 31, 2016, an increase of $159.8 million or 18.8%.", we extracted increase.01(thing increasing: 'Parke Bancorp's loans', start point: '$852.0 million', end point: '$1.01 billion', AM-TMP: '12/31/2017'). Such granularity is useful in answering complex queries such as "find a contiguous sequence of stock prices that are all increasing (or all decreasing)". The pipeline successfully classifies less common predicate senses in difficult contexts: e.g., settle.02 (meaning resolve) versus settle.01 (meaning decide), and cut.02 (meaning reduce) versus cut.01 (meaning slice). Correct classification of the predicate sense is critical for assigning correct semantic roles to the arguments. Our pipeline extracted ∼342,000 n-ary facts from only 5.2% of the sentences. One way to extract more facts is by expanding the financial predicate dictionary, while adding semantic and structural constraints on the new predicates to improve precision. The ablation studies show that the financial predicate filtering was the most important factor for precision. Furthermore, the cleaning stage presents a significant trade-off between precision and the total number of extractions. Depending on the downstream applications, we may want to favor precision over number of extractions or vice versa. The study also highlighted the significance of the cleaning stage in the overall precision and the importance of resolving coordinating conjunctions and appositions in generating more facts without sacrificing precision. The ablation study shows the importance of the minimization stage in decreasing the average argument length. We would like to examine expanding the minimization beyond the safe and dictionary minimization of adverbial patterns. Prepositional phrases, although good candidates for minimization, are equally challenging. It is also common to see arguments such as "more than 3%", "as much as 3 percent", "almost 3%" in stock reporting.
We would like to canonicalize these arguments into > 3% or bin them into numeric ranges in order to maximize their utility in downstream applications. In future work, we would like to examine the transferability of the pipeline to other domains. The following adjustments will be needed: (1) a domain targeted dictionary for filtering candidate predicate argument structures; (2) a domain targeted lexicon for dictionary minimization; and (3) supervised models for cleaning and fact scoring trained on datasets from the target domain. We would also like to explore methods to build a domain targeted dictionary using corpus level statistics and/or learn models that incorporate consistency constraints and automatically identify fact relevance using triple relevance features. Figure 5 (c) Top 10 relational nouns. Conclusion We developed a high precision knowledge extraction pipeline for the financial news domain that combines multiple information extraction techniques. We built a financial predicate dictionary that places structural and semantic constraints on arguments to produce high quality extractions. To enhance the utility of the extractions, we minimized overly specific arguments by processing coordinating conjunctions and appositions, and employed a financial lexicon to minimize adverbial nouns. We evaluated the pipeline and the resulting KG on a number of metrics and conducted ablation studies to examine the effect of different modules of the pipeline on these metrics. This study offered a number of insights and demonstrated the importance of both the financial predicate dictionary filtering and the noisy text cleaning stages in the overall precision of the pipeline.
7,593.6
2020-12-01T00:00:00.000
[ "Computer Science" ]
α-Synuclein activation of protein phosphatase 2A reduces tyrosine hydroxylase phosphorylation in dopaminergic cells α-Synuclein is an abundant presynaptic protein implicated in neuronal plasticity and neurodegenerative diseases. Although the function of α-synuclein is not thoroughly elucidated, we found that α-synuclein regulates dopamine synthesis by binding to and inhibiting tyrosine hydroxylase, the rate limiting enzyme in dopamine synthesis. Understanding α-synuclein function in dopaminergic cells should add to our knowledge of this key protein, which is implicated in Parkinson's disease and other disorders. Herein, we report a mechanism by which α-synuclein diminishes tyrosine hydroxylase phosphorylation and activity in stably transfected dopaminergic cells. Short-term regulation of tyrosine hydroxylase depends on the phosphorylation of key seryl residues in the amino-terminal regulatory domain of the protein. Of these, Ser40 contributes significantly to tyrosine hydroxylase activation and dopamine synthesis. We observed that α-synuclein overexpression caused reduced Ser40 phosphorylation in MN9D cells and inducible PC12 cells. Ser40 is phosphorylated chiefly by the cyclic AMP-dependent protein kinase PKA and dephosphorylated almost exclusively by the protein phosphatase PP2A. Therefore, we measured the impact of α-synuclein overexpression on levels and activity of PKA and PP2A in our cells. PKA was unaffected by α-synuclein. PP2A protein levels were also unchanged; however, the activity of PP2A increased in parallel with α-synuclein expression. Inhibition of PP2A dramatically increased Ser40 phosphorylation only in α-synuclein overexpressors in which α-synuclein was also found to co-immunoprecipitate with PP2A. Together the data reveal a functional interaction between α-synuclein and PP2A that leads to PP2A activation and underscores a key role for α-synuclein in protein phosphorylation. Generation of stably transfected inducible PC12 cell lines α-Syn was cloned into pcDNA3 as previously described (Stefanis et al., 2001) followed by subcloning into the SKSP shuttle cloning vector using HindIII-XhoI. From this vector, SfiI and PmeI were used to subclone α-Syn into the ecdysone-inducible PBWN vector, downstream of the response element. These constructs form the basis of the 'bomb system' for inducible transgene expression (Suhr et al., 1998; Suhr et al., 2001) and were generously provided by Fred Gage and Steve Suhr at the Salk Institute (La Jolla, CA, USA). PC12 cells were transfected with PBWN-α-Syn using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA), following the manufacturer's recommendations. One week after transfection, selection was begun in 500 μg/ml G418 and individual clones were selected. Tebufenozide, which is a molt-inducing insecticide that mimics the action of ecdysone (Addison, 1996), was used for induction of α-Syn expression. The PC12 clone Sm1 (plasmid) was only slightly inducible after 2.0 μM tebufenozide for 72 hours (Fig. 1A, right side, lane 2), making these cells excellent baseline controls for our highly inducible clone, Sm4 (α-Syn; Fig. 1A, right side, lane 3), which expressed high levels of α-Syn after tebufenozide treatment (100 nM-2.0 μM, 24-72 hours). Untransfected (UT) PC12 cells were utilized as additional baseline controls. The data presented on PC12 plasmid and α-Syn cells were obtained from induced cells.
Immunoblotting, antibodies and densitometry Cell lysates prepared in 1% NP40 buffer containing protease inhibitors were sonicated for ∼5 seconds and particulates were eliminated by centrifugation at 15,000 g for 10 minutes at 4°C as previously described (Perez et al., 1999). Protein concentrations were determined using the BCA assay (Pierce, Rockford, IL, USA) and spectrophotometry. Proteins in Laemmli sample buffer were boiled and 20 μg of protein were separated by 10% or 12% Tris-glycine SDS-PAGE and transferred to nitrocellulose. Prestained protein standards were used to determine the relative molecular mass of proteins. Equivalent sample loading was further confirmed using Ponceau S staining. Immunoblots were blocked in 5% nonfat dry milk in Tris-buffered saline, then incubated with primary antibodies at 4°C overnight. Primary antibodies included anti-synuclein-1 antibody (Syn-1, α-synuclein 610786, BD Bioscience-Transduction Laboratories), TH phospho-ser40 (Chemicon AB5935, Temecula, CA, USA), total TH (Chemicon MAB318, Temecula, CA, USA), PKA catalytic subunit (Calbiochem 539231, La Jolla, CA, USA) and PP2A catalytic subunit antibody (Upstate 1D6, Lake Placid, NY, USA). Secondary antibodies were peroxidase-coupled anti-mouse or anti-rabbit (Calbiochem, La Jolla, CA, USA). Data were visualized on Biomax-MR film (Kodak, Rochester, NY, USA) after chemiluminescence (Dupont NEN, Boston, MA, USA). The optical densities of the bands were quantitated using MCID (Imaging Research Inc., St Catharines, Ontario, Canada) or ImageQuant software (Amersham Biosciences). All data measuring phosphorylated Ser40 levels were normalized to total TH. Co-immunoprecipitation For co-immunoprecipitation all steps were carried out at 4°C. Adult rat striata were collected, weighed and homogenized in 5 volumes of ice-cold co-IP buffer, which contained 50 mM Tris pH 7.4, 100 mM NaCl, 5 mM EDTA, 0.3% Triton X-100, 10% glycerol plus aprotinin, leupeptin, 4-(2-aminoethyl) benzenesulfonyl fluoride (AEBSF), β-glycerophosphate, and dithiothreitol to inhibit protease and phosphatase activities. Supernatants were collected after centrifugation at 17,000 g (Sorvall RC5B, Kendro Laboratory Products, Newtown, CT, USA). A control aliquot of each supernatant was separated and frozen prior to co-IP for total protein determinations. Samples were pre-cleared for 1 hour with 10 μl 1% BSA plus 25 μl each of protein A- and protein G-Sepharose beads (Zymed Laboratories, South San Francisco, CA, USA). Immunoprecipitating antibodies (5 μg) were coupled to Seize™ X beads according to the manufacturer's instructions (Pierce, Rockford, IL, USA). Equal aliquots of homogenate (5.0 mg/ml total protein) were incubated with antibodies or pre-absorbed antibodies. Immune complexes were eluted, separated on 10 or 15% Tris-glycine SDS-PAGE gels, transferred to nitrocellulose, reacted with the same primary antibodies described above, and visualized by chemiluminescence. MN9D and PC12 cell extracts were prepared using the same buffers and conditions described above except that antibodies were not coupled to Seize™ X beads. As the α-synuclein and PP2A antibodies are both of mouse origin, we saw IgG heavy (55 kDa) and light chains (25 kDa) in addition to antigens and some nonspecific bands in some experiments (indicated by asterisks in the figure). PP2A assay PP2A immunoprecipitation and activity were determined using a nonradioactive kit according to the manufacturer's instructions (cat. no.
17-127, Upstate Biotechnologies, Lake Placid, NY, USA). Lysates of MN9D and PC12 cells were prepared in 20 mM imidazole-HCl, 2 mM EDTA, 2 mM EGTA, pH 7.0 with aprotinin, benzamidine and AEBSF (all from Sigma-Aldrich, St Louis, MO, USA). Protein concentrations were determined using BCA and 0.5-1.0 mg protein was immunoprecipitated using an anti-PP2A catalytic subunit antibody (cat. no. 06-222, Upstate, Lake Placid, NY, USA) and protein A-Sepharose beads (Zymed Laboratories, South San Francisco, CA, USA). Equivalent immunoprecipitation of PP2A from all samples was confirmed by immunoblot. Immunoprecipitated PP2A was then tested for activity in a 10-minute reaction at 37°C, in which phosphopeptide (K-R-pT-I-R-R) dephosphorylation was assayed spectrophotometrically at 650 nm using Malachite Green. PP2A activity was determined for all samples relative to a phosphate standard curve with activity expressed as pmol incorporated phosphate/minute/μg protein. Okadaic acid treatment Okadaic acid binds to the catalytic subunit of PP2A and inhibits its activity. Although okadaic acid at high concentrations can inhibit PP1 as well as PP2A, it is well documented that dephosphorylation of TH Ser40 occurs almost exclusively by PP2A, not by PP1 activity (Berresheim and Kuhn, 1994; Dunkley et al., 2004; Haavik et al., 1989; Leal et al., 2002). Furthermore, we treated cells with low to high dose okadaic acid (5 nM-1 μM) dissolved in DMSO (0.13 μM) (Garcia et al., 2002; Haavik et al., 1989) for 1 hour to assess the impact on PP2A inhibition in the presence of α-Syn overexpression and saw a similar effect. For baseline TH Ser40 phosphorylation, cells were treated with 0.13 μM DMSO for 1 hour without okadaic acid. Cell lysates were prepared, protein concentrations determined, and 20 μg of protein from each condition was separated by SDS-PAGE for immunoblotting and densitometry. Equal protein loading was also confirmed with Ponceau S staining of blots prior to antibody incubation. Phosphorylated Ser40 levels were normalized to total TH for all treatment conditions, providing an internal standard for each measure of P-Ser40 on TH. For α-Syn inducible PC12 cells, data were also normalized to relative α-Syn levels within treatment conditions. Statistical analyses Independent sample t-tests, linear regression, and one-way ANOVA were performed using SPSS (SPSS Inc., Chicago IL, USA) or Instat (Graphpad, San Diego, CA, USA) software. Post hoc analyses were performed by the method of Tukey-Kramer for data significant at P<0.05 or better. Experiments were repeated a minimum of two to three times on separate occasions with some experiments being performed five or more times. Data are presented as the mean ± s.e.m. for all treatments. Results Increased α-Syn expression reduces Ser40 phosphorylation in dopaminergic cells Ser40 phosphorylation is a major contributor to both TH activity and DA synthesis, and using a highly specific antibody to label P-Ser40 we measured Ser40 phosphorylation in stably transfected α-Syn-overexpressing MN9D cells compared to untransfected (UT) cells and plasmid transfected (plasmid) control MN9D cells. Equal amounts of protein were separated by SDS-PAGE and analyzed by immunoblotting for TH P-Ser40 levels followed by reprobing for total TH to normalize P-Ser40 data, and for α-Syn to confirm α-Syn expression levels. Some α-Syn is expressed in all MN9D cells but high levels are only apparent in α-Syn-transfected cells relative to UT or plasmid-transfected MN9D control cells (Fig. 1A, left side).
All MN9D cells also expressed abundant TH (Fig. 1B, left side); however, we sometimes saw a trend toward reduced TH levels in the α-Syn-overexpressing cells. To control for any variability in total TH levels, we normalized P-Ser40 levels to total TH in all experiments. We observed a significant reduction in P-Ser40 only in α-Syn-overexpressing cells (Fig. 1C, lane 3, left side). When we plotted the data from multiple experiments, there was a significant decrease in P-Ser40 only in the α-Syn-overexpressing MN9D cells (Fig. 1D, left graph). These data revealed that when α-Syn levels increased, P-Ser40 levels decreased significantly in MN9D cells, suggesting that one means by which α-Syn inhibits TH activity is by inhibiting TH Ser40 phosphorylation. To confirm that the effect on P-Ser40 was associated with α-Syn levels in dopaminergic cells, we generated additional clonal cell lines in which the expression of α-Syn was under the control of an inducible promoter. Using these induced PC12 cell lines, we again measured the impact of α-Syn on P-Ser40. UT and plasmid PC12 cells had little α-Syn compared to the α-Syn-overexpressing PC12 cells, which had up to 20-fold more α-Syn than plasmid control cells when cells were induced for 72 hours (Fig. 1A, right side). Total TH was equivalent in all PC12 cells (Fig. 1B, right side), confirming that the increase in α-Syn did not alter TH expression in stably transfected PC12 cells. When we compared P-Ser40 levels between UT, plasmid and α-Syn PC12 cells, we observed that while controls maintained equally high P-Ser40 levels, the α-Syn PC12 cells had reduced P-Ser40 levels (Fig. 1C, right side), similar to that observed in α-Syn-overexpressing MN9D cells (Fig. 1C, left side). When data from multiple experiments were plotted, we again saw a large decrease in P-Ser40 levels only in the α-Syn PC12 cells (Fig. 1D, right side), similar to the effect observed in MN9D cells (Fig. 1D, left side). To further probe the relationship between α-Syn overexpression and reduced TH Ser40 phosphorylation, we treated inducible α-Syn PC12 cells with different amounts of inducer. When the data were analyzed by linear regression, we identified a significant negative correlation between α-Syn and P-Ser40 (r=−0.93, n=15, P=0.0017). Taken together, the data from both MN9D cells and inducible PC12 cells indicate that the phosphorylation of TH Ser40 is negatively regulated by α-Syn in dopaminergic cells. This observation led us to further explore how α-Syn contributed to TH dephosphorylation. Overexpression of α-Syn does not alter PKA protein levels or activity Since PKA is the major kinase mediating Ser40 phosphorylation, and because α-Syn is so strongly implicated in enzymatic inhibition, e.g., of ERK2, PLD2 and TH (Iwata et al., 2001; Jenco et al., 1998; Perez et al., 2002), we first hypothesized that the reduction in TH Ser40 phosphorylation was probably occurring through α-Syn inhibition of PKA. To test this, we measured PKA protein levels in UT, plasmid-transfected, and α-Syn-overexpressing MN9D cells and found them to be equivalent (data not shown). We then measured PKA activity and confirmed similar activity in all cells regardless of α-Syn levels (data not shown), revealing that the reduction in P-Ser40 in α-Syn-overexpressing cells was not due to PKA inhibition. We then turned our attention to PP2A, the phosphatase that dephosphorylates P-Ser40 on TH (Haavik et al., 1989; Leal et al., 2002).
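The quantitative workflow above, normalizing P-Ser40 band densities to total TH and then testing group differences and the dose-response relationship by t-test and linear regression, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; all band densities and fold-induction values below are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical optical densities from immunoblots (arbitrary units).
pser40_control = np.array([1.10, 0.95, 1.05, 1.00])
th_control     = np.array([1.00, 0.98, 1.02, 1.01])
pser40_asyn    = np.array([0.40, 0.55, 0.48, 0.52])
th_asyn        = np.array([0.99, 1.03, 0.97, 1.00])

# Normalize P-Ser40 to total TH lane by lane (internal standard).
norm_control = pser40_control / th_control
norm_asyn    = pser40_asyn / th_asyn

# Independent-samples t-test between control and alpha-Syn cells.
t, p = stats.ttest_ind(norm_control, norm_asyn)
print(f"t = {t:.2f}, P = {p:.4f}")

# Dose-response: regress normalized P-Ser40 on relative alpha-Syn level.
asyn_level = np.array([1, 2, 5, 10, 20], dtype=float)  # hypothetical fold-induction
pser40     = np.array([1.0, 0.85, 0.60, 0.45, 0.30])   # hypothetical normalized values
fit = stats.linregress(asyn_level, pser40)
print(f"r = {fit.rvalue:.2f}, P = {fit.pvalue:.4f}")
```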
Overexpression of α-Syn does not alter PP2A protein levels We measured PP2A levels from control and α-Syn-overexpressing MN9D and PC12 cells by immunoblotting (Fig. 2). Equal amounts of total protein were evaluated for all conditions and revealed that PP2A levels in UT, plasmid and α-Syn MN9D cells were unchanged (Fig. 2, left side), as were PP2A levels in the PC12 cell lines (Fig. 2, right side). These data reveal that α-Syn overexpression did not alter PP2A protein levels in our dopaminergic cells, as can be further appreciated when graphs of data from several experiments are examined (Fig. 2). α-Syn increases PP2A activity and binds to the PP2A catalytic domain We measured PP2A activity from the MN9D and induced PC12 cells using a well-established immunoprecipitation and phosphatase activity protocol (Begum and Ragolia, 1996). As we found identical data for control UT and plasmid cells in all previous experiments, we utilized plasmid cells as baseline controls for this series of experiments. We first confirmed that equal amounts of PP2A had been immunoprecipitated with an antibody to the PP2A catalytic subunit (Fig. 3A). We then measured the activity of the immunoprecipitated PP2A and found a doubling of PP2A activity in α-Syn MN9D cells compared to plasmid-transfected control MN9D cells (Fig. 3B, left side). We observed a similar increase in PP2A activity in α-Syn PC12 cells (Fig. 3B, right side). We then tested for an interaction between α-Syn and PP2A in our cells by a co-immunoprecipitation assay. α-Syn was found to co-immunoprecipitate with PP2A from MN9D cells (not shown) and from inducible PC12 cells (Fig. 4A), confirming that soluble α-Syn interacts with PP2A, as measured using the Syn-1 antibody for co-immunoprecipitation. We tested the association between α-Syn and PP2A in rat striatum (Fig. 4B), confirming that the proteins also interact when α-Syn and PP2A are expressed at endogenous levels. Taken together, these data imply that α-Syn interacts with PP2A to significantly increase PP2A activity in both MN9D and PC12 dopaminergic cells and that this activation of PP2A contributed to a reduction in P-Ser40 levels on TH in our cells. Inhibiting PP2A produces robust phosphorylation of TH Ser40 only in α-Syn cells To assess whether TH Ser40 phosphorylation would increase after PP2A activity was inhibited, we treated control and α-Syn-overexpressing cells with the phosphatase inhibitor okadaic acid, which inhibits PP2A with an IC50 of 10 nM. We prepared cell extracts from MN9D and PC12 cells and measured increases in P-Ser40 levels after okadaic acid treatment (5 nM-1.0 μM). Okadaic acid treatment resulted in significant increases in P-Ser40 levels in all cells relative to vehicle-treated parallel cultures. This increase was expected because dephosphorylated Ser40 residues are present on TH in all cells. When α-Syn MN9D cells were treated with low-dose okadaic acid (10-100 nM), large significant increases in P-Ser40 levels were noted in α-Syn-overexpressing cells (Fig. 5A). A large increase in P-Ser40 was also observed in PC12 α-Syn cells (Fig. 5B, P<0.01). When we quantified the relative increases in P-Ser40 after okadaic acid treatment for MN9D and PC12 cells, we found nearly identical effects in both dopaminergic cell lines (MN9D=2.2±0.72; PC12=2.84±0.57, P>0.05). We noted that the magnitude of the increase in P-Ser40 levels when PP2A activity was inhibited was greatest in α-Syn-overexpressing cells (Fig. 5A,B), which had low baseline levels of P-Ser40 (see Fig.
1C, lane 3) and in which PP2A activity was significantly elevated (Fig. 3B). Since it is well documented that Ser40 dephosphorylation occurs almost exclusively by PP2A (Berresheim and Kuhn, 1994; Dunkley et al., 2004; Haavik et al., 1989; Leal et al., 2002), our findings indicate that (1) TH Ser40 is dephosphorylated by PP2A in dopaminergic cells, and (2) the Ser40 residue on TH remains accessible to PP2A even when α-Syn is overexpressed. Altogether, these data provide the first indication that α-Syn contributes to PP2A activation, which has potential relevance to synucleinopathies. Fig. 4. Interaction of α-Syn and PP2A in cells and in rat brain. (A) α-Syn PC12 cell lysates were co-immunoprecipitated, proteins were separated by SDS-PAGE, and the western blots (WB) were reacted with α-Syn antibody (α-S WB) or PP2A antibody (PP2A WB). The left blot shows the levels of α-Syn in cells from the initial homogenate (lane 1), in the homogenate after co-immunoprecipitation (co-IP; lane 2), and α-Syn immunoprecipitated with the Syn-1 antibody (lane 3). The right blot in A was prepared from the same co-IP sample, with the levels of PP2A in the initial homogenate (lane 1), PP2A in the homogenate after co-IP (lane 2), and PP2A co-immunoprecipitated along with α-Syn using the Syn-1 antibody (lane 3). Non-specific bands, two of which appear to be IgG bands (asterisks), are evident in both blots. (B) The α-Syn WB reveals α-Syn immunoprecipitated from rat striatum (left lane) and, in the PP2A WB, the PP2A that co-immunoprecipitated with α-Syn using the Syn-1 antibody (right lane). Relative molecular mass (Mr, ×10⁻³) was determined from prestained standards. α-S WB, α-synuclein Syn-1 antibody-reacted western blot; PP2A WB, PP2A antibody-reacted western blot. Fig. 5. Blockade of PP2A activity with okadaic acid demonstrates a role for PP2A in Ser40 dephosphorylation in MN9D and PC12 cells. PP2A inhibition by 10 nM-1.0 μM okadaic acid for 1 hour caused an increase in P-Ser40 levels in all conditions when compared to parallel vehicle-treated controls for each condition. The baseline value was set to zero to demonstrate the fold increase in P-Ser40 levels above baseline for each condition. (A) Okadaic acid at low doses (0-100 nM) produced small changes in P-Ser40 in plasmid control MN9D cells but large, significant increases in P-Ser40 in α-Syn-overexpressing MN9D cells. Vehicle-treated cells were essentially unchanged from baseline at these concentrations of okadaic acid. (B) With higher-dose okadaic acid, even more robust increases in P-Ser40 were noted for α-Syn MN9D and induced α-Syn PC12 cells, suggesting that the low baseline P-Ser40 phosphorylation levels in α-Syn overexpressors had probably occurred through effects on PP2A activation. Values are mean ± s.e.m. of two to six independent experiments. *P<0.01. Discussion To further elucidate normal α-Syn function and the mechanism by which α-Syn regulates TH phosphorylation and DA synthesis, we performed the studies described above and made the remarkable discovery that α-Syn contributes to the regulation of PP2A phosphatase activity, leading to PP2A activation. We noted that increased α-Syn expression produced a several-fold decrease in TH Ser40 phosphorylation in the MN9D and PC12 cells with elevated α-Syn levels. This was further assessed by measuring a dose effect of α-Syn on P-Ser40 reduction in the inducible PC12 cells, in which we found a significant negative correlation between elevated α-Syn levels and a diminution of P-Ser40.
Thus, using two independent dopaminergic cellular models, we showed that only when α-Syn levels were elevated, whether by constitutive or inducible overexpression, did we see significant decreases in P-Ser40 levels. Because α-Syn reportedly inhibits multiple enzymatic activities (Iwata et al., 2001; Jenco et al., 1998; Perez et al., 2002), we had anticipated identifying a role for α-Syn as an inhibitor of PKA, which proved not to be the case. We found no change in either PKA protein levels or PKA activity in cells overexpressing α-Syn, confirming that the decrease in Ser40 phosphorylation in α-Syn-overexpressing cells was not due to an effect of α-Syn on PKA kinase activity. We therefore turned our attention to PP2A, the enzyme that is responsible for P-Ser40 dephosphorylation on TH. Haavik and colleagues originally showed that PP2A, the major serine/threonine phosphatase that regulates many signaling pathways in mammalian cells, is responsible for greater than 90% of the dephosphorylation of TH at Ser40 (Haavik et al., 1989). Incubation of adrenal chromaffin cells with okadaic acid in the aforementioned study dramatically increased TH phosphorylation and TH activity, firmly establishing PP2A as a regulator of both Ser40 phosphorylation and TH activity in dopaminergic cells. More recently, using PP1- and PP2A-specific inhibitors to measure TH dephosphorylation in brain, Dunkley and colleagues (Leal et al., 2002) reconfirmed the role of PP2A in both TH activity and phosphorylation state. In our studies, we found that PP2A protein levels were not altered by α-Syn overexpression in dopaminergic cells, yet PP2A activity was significantly increased in α-Syn-overexpressing cells. To further verify that the effects on Ser40 were associated with changes in PP2A activity, we treated cells with okadaic acid and found that Ser40 phosphorylation became significantly elevated in cells overexpressing α-Syn that had low baseline P-Ser40 levels. These data implicate PP2A activation as the mediator of P-Ser40 dephosphorylation in our α-Syn cell lines. These findings in dopaminergic cells with increased α-Syn levels confirm that (1) PP2A is more active, and (2) a dramatic increase in P-Ser40 phosphorylation is achieved by blocking PP2A activity. Additionally, the data strongly suggest that α-Syn-mediated activation of PP2A may have reduced both TH activity and DA synthesis in our earlier studies (Perez et al., 2002). Co-localization of the various PP2A subunits is required for PP2A activation and is thought to occur by interactions of the various subunits with molecules such as chaperones. An active PP2A enzyme consists of a heterotrimer of the structural A subunit, a catalytic C subunit, and a regulatory B subunit (Dobrowsky et al., 1993). The A and C subunits are ubiquitously expressed (Mayer et al., 1991) and form the catalytic complex (PP2A/C), which interacts with at least three different families of regulatory B subunits, as well as with certain tumor antigens (Mumby and Walter, 1993). The regulatory B subunits of PP2A are known to be temporally expressed during development (Csortos et al., 1996; Mayer et al., 1991; McCright and Virshup, 1995; Ruediger et al., 1991), and neuron-specific isoforms have also been identified (Mayer et al., 1991).
The substrate specificity of PP2A appears to be determined by the regulatory B subunits (Cegielska et al., 1994; Csortos et al., 1996), and there is evidence that B subunits are associated with targeting the PP2A/C catalytic complexes to various intracellular sites such as microtubules (McCright et al., 1996; Sontag et al., 1995) and mitochondria (Ruvolo et al., 2002), suggesting that PP2A complexes are actively trafficked by their associated interacting proteins, one of which may be α-Syn. We have discovered that α-Syn and PP2A interact with each other in soluble fractions of brain and dopaminergic cells, as measured by co-immunoprecipitation. Membrane-bound PP2A in brain is reportedly less active (Sim et al., 1998); thus, the interaction of PP2A with α-Syn within the cytosol may serve to stimulate PP2A activity. This interaction of α-Syn with PP2A may affect PP2A conformation or trafficking and subsequently contribute to its activation. α-Syn oxidative modification or aggregation occurs in neurodegenerative diseases such as Alzheimer's disease, a condition in which PP2A is also implicated (Trojanowski and Lee, 1995; Zhao et al., 2003). These are, to our knowledge, the first data to identify an association between α-Syn and PP2A that affects PP2A activity; this association may contribute to neuronal homeostasis, which, if disrupted, may be detrimental. The impact of α-Syn on TH P-Ser40 may involve other means of regulating the phosphorylation of this site. For example, α-Syn can directly interact with 14-3-3 (Ostrerova et al., 1999) and with TH (Perez et al., 2002). It is known that 14-3-3 can activate TH (Ichimura et al., 1988) by binding first to TH Ser19 and then to TH Ser40 (Kleppe et al., 2001). There is additional evidence that the binding of 14-3-3 to TH phosphorylated on Ser19 and Ser40 stabilizes its conformation and enhances TH activity (Bevilaqua et al., 2001). An interaction of α-Syn with 14-3-3 may contribute to dissociation of 14-3-3 from TH, permitting PP2A physical access to the Ser40 site, with subsequent effects on TH phosphorylation and DA synthesis. Further studies are required to identify the precise manner by which α-Syn acts to stimulate PP2A activity. However, regardless of how it does so, we provide novel evidence that α-Syn interacts with and contributes to the activation of PP2A, a major brain phosphatase. Our findings also underscore the importance of further elucidating normal α-Syn function because (1) many substrates require PP2A for dephosphorylation, and (2) α-Syn is implicated in multiple synucleinopathies.
5,647.6
2005-08-01T00:00:00.000
[ "Biology", "Medicine" ]
Normalization of Web of Science Institution Names Based on Deep Learning Introduction Academic evaluation is a process of assessing and measuring researchers, institutions, or disciplinary fields. Its purpose is to evaluate their contributions and impact in the academic community, as well as determine their reputation and status within specific disciplinary domains. It is an important factor in government decision-making and resource allocation. As the quantity and quality of publications serve as significant indicators for academic evaluation, Web of Science (WOS), one of the world's most renowned academic citation databases, is commonly used for academic research evaluation [1], ranking [2], and comparison [3] by researchers and academic institutions. In addition, the results of an academic evaluation affect the analysis of university education. For example, Laura [4] analyzed 17 communication and journalism courses from eight of Europe's highest-ranked universities in the field of communication based on the QS World University Rankings to assess the universities' educational programs. However, according to a large-scale analysis conducted by Huang [5], the lists provided in WOS's Essential Science Indicators (ESIs) are not as reliable and accurate as one might expect. Approximately 25% of author names (consisting of the initials of their first name and last name) are shared by at least two different individuals. When explicit data are not provided by authors or publishers, data aggregators such as WOS or Scopus find it challenging to provide accurate or statistically reliable data. This issue is commonly referred to as the name ambiguity problem and can be divided into two parts: the one person, multiple names problem (where one author entity is associated with multiple name variants in different publications) and the one name, multiple persons problem (where one author name corresponds to multiple different author entities). Institutional information serves as an identity marker for authors in the literature. Research has shown that the probability of homonyms in secondary institutions is very low [6,7]. One approach to identifying homonymous author entities is by extracting primary and secondary institution names from addresses using patterns, such as comma separators and "university, department, laboratory".
However, research by Falahati [8] revealed that out of 84 universities in Iran, there are 1668 name variants in WOS, primarily stemming from abbreviations, spelling errors, spatial variations, syntactic arrangements, and vowel/consonant combinations, with spelling errors accounting for 34.57% of the variants. Confronted with a vast number of non-standardized institution entities, there are cases of the mislabeling and underlabeling of institution data in the institution lists of the ESIs and InCites (mislabeling refers to indexing an address belonging to institution A as institution B, while underlabeling occurs when an address belonging to an institution is not indexed under that institution). Due to reasons such as author spelling errors, transcription errors in systems, translation issues, variations in institution and department names, and the use of informal names or abbreviations, the same institution may have multiple different representations (Table 1), or an institution entity may be transcribed as another institution entity. Scholars have conducted extensive research on the institution name synonym recognition task (Table 2). In the early stages of research, scholars extensively explored the similarity of institution names from both character and word perspectives using methods such as edit distance [9] and Jaccard similarity [10]. However, institutions with low literal similarity may refer to the same entity, such as "Chinese Academy of Science" and "CAS" (full name and abbreviation) or "Chinese Academy of Science" and "Chinese Acad Sci" (full name and keyword abbreviation). Conversely, institutions with high literal similarity may be distinct entities, for example, "Fukushima Univ" and "Fukushima Med Univ". To address the limitations of literal similarity, some researchers have combined author name features, address features (city/state/country names) [11], and institution name features (organizational keywords) [12,13] with string similarity algorithms to achieve better results [14,15]. Another group of researchers [16] introduced statistical approaches by applying the principles of TF-IDF and analyzing a large number of institution names. They found that high-frequency words had limited discriminative power for distinguishing institution entities. To overcome this, they assigned different weights to words in institution names based on their frequencies and used the weighted average of different words in addresses to determine whether they referred to the same institution. Some scholars have adopted entity linking methods in an attempt to link institution names to external knowledge bases. Initially, researchers used proprietary databases from governments or institutions [17]. However, these private databases were often small in scale and not publicly accessible, limited to disambiguating institutions within specific regions or fields. With the development of publicly available institution knowledge bases and big data technologies, recent studies have focused on constructing standardized models for institution names. These models link institution entities in bibliographic records to multiple-source institution identifiers [18][19][20], such as Wikidata, GRID, ISNI, Ringgold, ROR, etc.
With the advancement of deep learning, the advantages of automatically learning features from limited annotated data have been widely applied in the field of entity disambiguation. Currently, deep learning methods are less commonly used in institution disambiguation research on bibliographic data, with word-vector-based approaches predominating. These methods utilize word vector models such as Word2Vec, GloVe, and BERT to learn the semantic relationships of institution names and combine clustering, rules, or string-based methods to identify the form similarity, variants, and abbreviations of institution names [21]. These methods, however, require a significant amount of annotated data, and training and fine-tuning the models can be complex. Currently, there are two main issues in institution name standardization using deep learning methods: 1. Feature Extraction and Fusion: Institution data features can be categorized into two main types: text features and semantic relationship features. Text features primarily measure the literal similarity of institution names, which is effective in identifying names that are similar in their literal form. However, they may perform poorly in handling institution aliases and abbreviations. On the other hand, semantic relationship features focus on analyzing co-occurrence relationships and hierarchical similarity between institutions, which can better identify aliases and abbreviations. However, they may sometimes incorrectly merge structurally similar but distinct institutions. The current research often employs techniques such as term frequency, TF-IDF, string similarity, and the longest common substring to extract text features, and deep learning models such as Word2Vec to extract semantic features. These features are then combined through rules or weighted fusion. This approach separates the association between text and semantic features and introduces uncertainty and subjectivity in feature combination and fusion weight allocation. 2. Utilizing Multiple Contextual Information of Institution Entities: The current research often relies on single-context matching, where only the most similar context containing the institution entity is considered during the institution matching process. This approach fails to fully leverage the multiple contexts in which an institution may appear, thereby limiting the recognition accuracy.
To address these issues, this study proposes a synonym relationship recognition model that integrates multi-granularity features and multiple contextual information. The model combines Char-CNN and Word2Vec techniques to extract text and semantic features of institution entities and efficiently fuses the different features using a Highway network. The model also utilizes BiLSTM combined with a multi-context matching layer to integrate the appearances of institution entities in different texts, resulting in a comprehensive entity representation. Finally, the model uses cosine similarity to calculate the similarity between institutions, enabling accurate synonym relationship recognition. This multidimensional feature fusion approach effectively improves recognition accuracy and is suitable for handling complex institution name variants and structures. This paper's contributions can be summarized as follows: • Addressing the deficiencies in feature extraction and fusion for institution name standardization: This paper proposes the construction of an embedding layer that extracts and fuses two types of features. There may exist correlations and dependencies between different feature categories. By extracting features from different categories within a unified model, the model can share learned knowledge and representations, thereby improving generalization and effectiveness. • Solving the issue of underutilizing the multiple contextual information of institution entities: This paper introduces a method based on bidirectional matching and multi-context fusion. This approach effectively leverages the multiple contexts in which institution entities may appear. By considering and integrating information from different contexts, the model achieves a more comprehensive understanding of institution entities, leading to improved recognition accuracy. These contributions aim to enhance the performance and robustness of the institution name standardization task by improving feature extraction, fusion, and the utilization of contextual information. This paper is structured as follows: Section 2 summarizes the related work. Section 3 describes our approach, including the individual modules of the institution synonym recognition model. Section 4 reports the experiments and results. Section 5 summarizes and discusses future work. Related Works Based on the existing literature, scholars from both domestic and international contexts have conducted extensive theoretical and practical research on institution name synonym recognition and standardization. In the field of synonym recognition for institution names, various methods have been employed. The representative methods include the following: 1. String similarity-based methods: Common algorithms, such as the edit distance, Jaccard coefficient, and TF-IDF, are used to measure the similarity between institution names. The edit distance represents the minimum number of edit operations (insertion, deletion, or substitution) required to transform one string into another. French [9] proposed the relative edit distance, which divides the edit distance by the minimum length of the two institution names to measure similarity. To address syntactic variations in institution names, French also introduced the word-based edit distance, which splits institution names into words and calculates the edit distance based on approximate word matching (a sketch of these string-similarity baselines appears at the end of this section).
2. Statistical-based methods: These methods leverage the statistical characteristics of institution name occurrences, such as word frequency, co-occurrence relationships, and contextual features, to differentiate between different institutions. Onodera [22] assigned different weights to words based on their frequency and measured the similarity between two institution names by summing the weights of matching words. Jiang [16] proposed a clustering method using the Normalized Compression Distance (NCD) to match institution documents. The NCD utilizes data compression techniques to measure the similarity between two texts, assuming that if two texts are semantically similar, their compressed representations should exhibit high redundancy and similarity. Cuxac [23] and others obtain distributed vectors containing rich semantic information from raw data. These vectors are then used in subsequent deep learning models or for vector similarity comparison. Sun [24] applied the Word2Vec word embedding model to semantically learn the SCI address field and disambiguate institution names based on the similarity of institution word vectors. Chen et al. [21] utilized the GloVe model to learn institution vector representations and applied DBSCAN clustering to institution names based on vector similarity and matching rules. In WOS, the characteristics of institutional data fall into two broad categories: textual features and semantic relational features. Text feature methods mainly compare the literal similarity of institution names and use techniques such as word frequency, TF-IDF, string similarity, and the longest common substring to judge the similarity between institutions. These methods are effective in identifying literally similar institution names, but they do not perform well when dealing with aliases and abbreviations of institutions. In contrast, semantic relationship features focus on the analysis of co-occurrence and hierarchical similarity between institutions and can better identify aliases and abbreviations, but they sometimes mistakenly group together structurally similar but substantially different institutions. To improve the recognition of synonymous relations, combining these two kinds of features is particularly important. Through the manual observation and weighted fusion of these features, key information that helps distinguish institutions can be extracted in a targeted manner. However, this approach involves some uncertainty and subjectivity in constructing feature combinations and assigning fusion weights. In addition, the current research often relies on single-context matching; that is, only the most similar affiliation strings containing institution entities are considered in the matching process, and the multiple contexts in which an institution may occur are not fully utilized. This limits recognition accuracy. Therefore, this paper proposes a synonym relationship recognition model that integrates multi-granularity features and multi-context information.
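As promised above, here is a short Python sketch of the string-similarity baselines. The division by the minimum name length follows the description of French's relative edit distance [9] given earlier; the Levenshtein implementation itself is a standard dynamic program, not code from any of the cited works.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def relative_edit_distance(a: str, b: str) -> float:
    """Edit distance divided by the minimum of the two name lengths."""
    return levenshtein(a, b) / min(len(a), len(b))

# Low literal similarity, same entity vs. high literal similarity, different entities:
print(relative_edit_distance("Chinese Academy of Science", "Chinese Acad Sci"))
print(relative_edit_distance("Fukushima Univ", "Fukushima Med Univ"))
```

The second call illustrates exactly the failure mode discussed above: the two Fukushima names score as very similar even though they denote different institutions, which is what motivates the semantic features used later.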
Overview of the Proposed Model Kim [25] conducted research indicating that the use of subword features, such as stems and affixes, can effectively identify the abbreviated forms of words. This approach reveals the potential of subword features in capturing the microstructure of language, particularly in the recognition of abbreviations and contractions, where it demonstrates high performance. Based on this finding, we have chosen to employ Char-CNN to extract character-level features from institution names. Char-CNN allows for the in-depth analysis of the internal structure of words, enabling the identification and learning of specific character sequences or combinations. This capability proves particularly effective in handling spelling errors, abbreviations, and domain-specific language. By incorporating Char-CNN in our approach, we not only enhance the model's ability to perceive subtle textual differences but also improve its robustness when dealing with anomalous text. Word2Vec is capable of capturing the semantic similarity between words, but it does not capture the importance and distribution of words within a document collection. Therefore, we utilize Word2Vec in combination with TF-IDF to obtain word-level features. In this paper, we employ Highway networks [26] to integrate character-level and word-level features. This network structure effectively controls the flow of information between different features through its gating mechanism. Specifically, the sigmoid function in Highway networks determines the proportion of information flow between character-level and word-level embeddings, while the fully connected layer appropriately transforms and adjusts the passed information. This approach allows the model to flexibly integrate text features at both the character and word levels, fully leveraging the fine-grained information from character-level features and the semantic richness of word-level features. As a result, it enhances the accuracy and robustness of institution name recognition and matching. In the task of institution synonym recognition, understanding and utilizing the hierarchical relationships of institutions are crucial for accurately determining whether two institutions refer to the same entity. To address this, we employ BiLSTM (Bidirectional Long Short-Term Memory) to aggregate contextual semantic information. BiLSTM is effective in capturing both preceding and succeeding contextual details, including semantic and syntactic information, thus providing a comprehensive semantic understanding. Furthermore, considering the ambiguity and fuzziness inherent in natural language processing, our model incorporates multi-context matching techniques. By analyzing and comparing the relationships between different contexts, the model enhances its ability to capture semantic information. Multi-context matching allows the model to automatically determine which contexts are more critical for interpreting the semantics in a sentence by learning the matching relationships and corresponding weights between different contexts. This approach not only improves the model's expressive power but also enhances its robustness and accuracy when dealing with semantic complexity. Ultimately, by calculating the cosine similarity of the fused feature vectors, the model is able to determine whether two institution entities are synonymous. The architecture of the model is illustrated in Figure 1.
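The gated fusion step just described can be made concrete with a minimal PyTorch sketch. This is an illustration of the general Highway mechanism [26] applied to the concatenated character- and word-level vectors, not the authors' released code; the dimensions (n=300 word features, m=100 Char-CNN filters) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Highway(nn.Module):
    """One Highway layer: a sigmoid gate mixes a transformed representation
    with the untouched input, letting the model decide per dimension how much
    of the char-level vs. word-level signal to pass through."""
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # g(W_H y + b_H)
        self.gate = nn.Linear(dim, dim)        # t = sigmoid(W_T y + b_T)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        t = torch.sigmoid(self.gate(y))        # transform gate
        h = torch.relu(self.transform(y))      # non-linear transform of the input
        return t * h + (1.0 - t) * y           # gated mixture of transformed and raw

# Fuse a word-level vector (e.g. Word2Vec weighted by TF-IDF) with a
# Char-CNN feature vector for one token:
word_vec = torch.randn(1, 300)   # assumed word-embedding size n = 300
char_vec = torch.randn(1, 100)   # assumed number of Char-CNN filters m = 100
fused = Highway(dim=400)(torch.cat([word_vec, char_vec], dim=-1))
print(fused.shape)               # torch.Size([1, 400])
```

The design point is that, unlike fixed weighted fusion, the gate is learned jointly with the rest of the model, which is exactly the subjectivity in weight allocation the introduction criticizes.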
Address Retriever For a candidate entity e_l in the entity set E and its formal name O, the address retriever retrieves the most similar address segments from the corpus D where the entity appears. The retrieved addresses of e are represented as a set A = {a_1, a_2, ..., a_p}, where p is the number of address segments. Multi-Granularity Feature Embedding Layer Character-level feature extraction: Let c be the vocabulary of characters and d be the dimensionality of character embeddings. For each word a, its character sequence is denoted as (c_1, c_2, ..., c_l), where l is the length of word a. The vector matrix representation of word a is denoted as C_a ∈ R^(d×l). We use a convolution between C_a and multiple filters (or kernels) H ∈ R^(d×w) of width w. Char-CNN computes the feature map f_a[i] = tanh(⟨C_a[i : i + w − 1], H⟩ + b), where C_a[i : i + w − 1] represents the i-th to (i + w − 1)-th columns of matrix C_a, and ⟨A, B⟩ is the Frobenius inner product. Filters essentially extract n-gram character sequences from words, where the size of the n-gram corresponds to the width of the filter. The feature for filter H is then taken as y_a = max_i f_a[i]; taking the maximum value captures the most important feature for each filter. For a word, this study employs a total of m convolutional filters. The structure of Char-CNN is illustrated in Figure 2 below. Word embedding: In this study, the pre-trained Word2Vec and TF-IDF are utilized to obtain semantic embeddings for each word, with a length of n. We concatenate the character feature embeddings with the word semantic embeddings and denote the resulting representation as y_a = [y_a^1, ..., y_a^(n+m)], y_a ∈ R^(n+m). For y_a, this study utilizes a Highway network to adjust the relative contributions of the word semantic embeddings and the character feature embeddings, thereby obtaining a more effective word representation. The Highway network employs a gating mechanism to control the flow and transformation of information, represented by z = t ⊙ g(W_H · y_a + b_H) + (1 − t) ⊙ y_a, with the transformation gate t = σ(W_T · y_a + b_T). Let W represent the weight matrix and b denote the bias. The function g is a non-linear activation function, which can be either ReLU or Tanh. g(W_H · y_a + b_H) is responsible for modifying the input data, allowing the network to adaptively choose the extent of the transformation applied to the input data, thereby enhancing the network's expressive power and adaptability. t represents the transformation gate, which determines the amount of information to be transmitted to the next step or bypassed entirely. It serves as a control mechanism for regulating the flow of information and adjusting the relative contribution of the input data. The structure of the Highway network is shown in Figure 3 below. Contextual embedding: LSTM (Long Short-Term Memory) is a variant of recurrent neural networks (RNNs) that plays a crucial role in contextual encoding. It is capable of modeling sequential dependencies, storing and transmitting contextual information, handling variable-length sequences, and providing rich representational capacity. Therefore, we employ LSTM to encode the contextual information of entity mentions in institution names. To take into account the position of entities in the context, we utilize two LSTMs to encode the forward and backward directions, each halting after encountering the entity word: the entity representation is the concatenation h_e = [h→_(t_e) ; h←_(t_e)], where t_e represents the positional index of entity e in the context, h_e ∈ R^(1×d_HE), and d_HE = forward_hidden_size + backward_hidden_size.
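The character-level extractor just formalized can be sketched in PyTorch as follows. This is a minimal illustration of the standard embed-convolve-max-pool Char-CNN pattern; the character vocabulary size and the dimensions d, m, w are illustrative assumptions, not values reported by the paper.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Embed characters, convolve with m filters of width w, max-pool over
    positions so each filter reports its strongest n-gram response."""
    def __init__(self, n_chars: int = 128, d: int = 16, m: int = 100, w: int = 3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d)        # columns of C_a, one per character
        self.conv = nn.Conv1d(d, m, kernel_size=w)   # m filters H of width w

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, word_length)
        x = self.embed(char_ids).transpose(1, 2)     # (batch, d, l)
        f = torch.tanh(self.conv(x))                 # feature maps, (batch, m, l - w + 1)
        return f.max(dim=2).values                   # max over positions -> (batch, m)

ids = torch.randint(0, 128, (1, 10))                 # a 10-character word
print(CharCNN()(ids).shape)                          # torch.Size([1, 100])
```

Because each filter fires on its best-matching character n-gram, misspellings and truncated abbreviations ("Acad", "Sci") still activate filters tuned to the intact substrings, which is the property motivating the character channel.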
The Multi-Context Fusion Layer Based on Bidirectional Matching For two institution entities H and G that are to be checked for identity, the contexts can be expressed as H = {h_1, h_2, h_3, ..., h_p} and G = {g_1, g_2, g_3, ..., g_q}, where p and q are the numbers of contexts. To determine whether entity H and entity G refer to the same entity, we go beyond considering a single context and instead consider multiple contexts. We evaluate the similarity between entity H and entity G by comprehensively considering the information from the multiple contexts in H and G. The influence of different contexts in determining whether two entities refer to the same entity may vary [26]. For a given context h_p, we calculate the influence weight score a_p = max(sim(h_p, G)), where sim(h_p, G) represents the similarity between h_p and the q contexts of entity G, and max selects the highest similarity value. The underlying idea is that for institution entities, there might be multiple pieces of address information associated with the same entity. However, the matching between two institution entities is often dominated by the most closely matching addresses between them. An influential context h_p is likely to be highly similar to one of the addresses and less similar to the rest. Therefore, the influence of a context on the similarity weight between the two entities should be determined by the context that is most similar to the corresponding address in the other entity. For each h_p in H and g_q in G, a pairwise score matrix is first computed as S_HG[p, q] = sim(h_p, g_q). The matching score matrix S can then be obtained by taking the softmax of S_HG over a certain axis (over the 0-axis for S_(H→G) and the 1-axis for S_(H←G)). For each piece of encoded context, say h_p for entity H, we use its highest matched score with its counterpart as the relative informativeness score of h_p to H: a_p = max_q S_(H←G)[p, q]. We further aggregate the multiple pieces of encoded context for each entity into a global context based on the relative informativeness scores: h = Σ_p (a_p / Σ_(p′) a_(p′)) · h_p, and analogously for g; h and g are the final context embeddings for entities H and G. Training Objectives Our training objective is to enable the model to identify whether two given entity names belonging to different institutions refer to the same entity. To accomplish this objective, we utilize a Siamese (contrastive) loss of the form L = Y · L_+(e, k) + (1 − Y) · L_−(e, k), with L_+(e, k) = 1 − s(h, g) and L_−(e, k) = max(0, s(h, g) − m). Y represents the label value, which selects between the two cases of the loss function: L_+(e, k) when entities H and G are synonymous institution entities, and L_−(e, k) when entities H and G are not synonymous institution entities. s(·) is a similarity function, such as the cosine similarity, and m is the margin value that represents the desired minimum distance between dissimilar input pairs. L_+(e, k) is within the range [0, 1], where higher similarity scores correspond to lower values. For the loss L_−(e, k), when s(h, g) is less than the margin value m, it remains zero; otherwise, it increases as s(h, g) increases. Experiments 4.1. Evaluation Metrics To evaluate the performance of our method, we adopted the precision, recall, and F1 score as the evaluation metrics, calculated as precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 = 2 · precision · recall / (precision + recall). TP (True Positive) refers to the number of positive instances correctly predicted as positive. FN (False Negative) refers to the number of positive instances incorrectly predicted as negative. FP (False Positive) refers to the number of negative instances incorrectly predicted as positive.
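Before turning to the experiments, the bidirectional multi-context fusion described above can be sketched in PyTorch. This is our reading of the equations in Section 3 (in particular the choice of softmax axis for each direction), with random tensors standing in for the BiLSTM context encodings; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fuse(H: torch.Tensor, G: torch.Tensor):
    """H: (p, d) encoded contexts of entity H; G: (q, d) contexts of entity G.
    Returns one global context embedding per entity."""
    # Pairwise cosine scores S_HG[p, q] = sim(h_p, g_q).
    S = F.normalize(H, dim=1) @ F.normalize(G, dim=1).T   # (p, q)
    S_hg = F.softmax(S, dim=0)   # normalized over H's contexts (H -> G direction)
    S_gh = F.softmax(S, dim=1)   # normalized over G's contexts (H <- G direction)
    # Each context's informativeness = its best match on the other side.
    a_h = S_gh.max(dim=1).values             # (p,)
    a_g = S_hg.max(dim=0).values             # (q,)
    # Aggregate contexts into a global embedding, weighted by informativeness.
    h = (a_h / a_h.sum()) @ H                # (d,)
    g = (a_g / a_g.sum()) @ G                # (d,)
    return h, g

H, G = torch.randn(4, 256), torch.randn(6, 256)   # 4 vs. 6 contexts, d = 256 assumed
h, g = fuse(H, G)
score = F.cosine_similarity(h, g, dim=0)          # s(h, g), fed to the Siamese loss
print(float(score))
```

During training, this cosine score s(h, g) plugs directly into the contrastive objective given above: pairs labeled synonymous are pushed toward s = 1, and non-synonymous pairs are penalized only once their similarity exceeds the margin m.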
Datasets We employed entity linking techniques to link entities in Wikidata with entities in the Web of Science (WOS) dataset for institutions with a publication count greater than 1000 [27]. The official name of an institution was used as the anchor sample, while the most frequent alias appearing in WOS was considered the positive sample. Additionally, we selected institutions with the most similar names but representing different entities as negative samples. Among the 5902 institutions in WOS with a publication count greater than 1000, we successfully linked 3572 institution entities to the institutional knowledge base. Of these, 1494 institution entities had aliases linked to the knowledge base. Eventually, we obtained 1494 positive and 1494 negative pairs, from which we selected 2800 pairs as the final dataset for institution synonym relationships, as shown in Table 3. Baselines To compare the performance of our proposed method, we selected four other methods as benchmark approaches. The first two methods are classical approaches for institution synonym recognition, while the latter two are classical models for synonym recognition in general. 1. Huang's method [5]: This method is considered representative of rule-based institution synonym recognition due to its emphasis on knowledge and rule completeness and generality. In the following sections, we refer to this method as "Huang's method" for simplicity. 2. Word2vec [28]: This method is commonly used in deep learning-based institution synonym recognition and serves as a baseline model in our comparison. 3. SRN [29]: SRN is a character-level model that encodes entities as a sequence of characters using BiLSTM. The hidden states are averaged to obtain an entity representation, and cosine similarity is used in the training objective. 4. MaLSTM [30]: MaLSTM is a word-level model that takes word sequences as input. Unlike SRN, which uses BiLSTM, MaLSTM employs a unidirectional LSTM and utilizes the Euclidean norm to measure the distance between two entities. Results To demonstrate the superiority and effectiveness of the proposed model in institution synonym recognition, we conducted comparative experiments and ablation experiments with the four aforementioned models on our custom dataset. The results of these experiments are presented in Table 4. From the upper part of Table 4, we can see that our model consistently outperforms the baselines in precision and F1 and is lower than Word2vec only in terms of recall. SRN had the worst overall performance. Our model is 9.44% better in F1 than the best baseline model. To study the contribution of the different modules of our model to synonym discovery, we also report ablation test results in the lower part of Table 4. The Highway network contributes a 1.49% improvement in F1, Char-CNN contributes 4.38%, Word2vec contributes 14.54%, and bidirectional matching contributes 10.89%. Results Analysis From the experimental results, it can be observed that deep learning models based on single-context matching performed poorly in this data environment. This can be attributed to two main issues:
1. Dependency on a single context: These models rely solely on single-context information, which makes them susceptible to absorbing excessive noise during the learning process and limits their ability to fully utilize the additional information provided by other relevant contexts. This approach struggles to effectively differentiate between complex scenarios with multiple similar institution names. 2. Emphasis on sentence encoding: These models tend to use the encoding of the entire sentence as the final embedding output, without specifically highlighting the importance of the institution entity itself. For institution synonym recognition, the focus should be on the specific encoding of the institution entity rather than generic information from the entire sentence. Furthermore, while the Word2vec model demonstrates high recall in such tasks, its precision is limited. This may be because it tends to generalize semantically similar institutions (such as similar departments in different universities) into the same category, leading to a lack of precision. Comparing the results of the models without Char-CNN and without Word2vec, using word-level features alone outperforms using character-level features alone. However, combining both types of features yields even better results, as the combination can identify some character-level misspellings. Compared to directly concatenating the Char-CNN and Word2vec features, using a Highway network has a better effect, indicating that this component positively influences the model's performance. The Highway network better integrates the two types of features, providing the model with better feature representation capabilities. Comparing the model without the bidirectional matching layer and the complete model, the bidirectional matching method effectively utilizes the importance of different contexts in institution name matching. It can leverage information from multiple contexts to enhance the overall performance of the model, significantly improving institution synonym recognition. Error Analysis Error analysis is critical for understanding a model's shortcomings and thereby guiding its in-depth analysis and improvement. We analyzed the data and recorded the error types and the causes of errors. The error analysis results are shown in Table 5. As mentioned in Section 3, deep learning-based models perform well overall, but some problems remain, and follow-up work will focus on the aspects identified there. In addition, because we chose institutions with over 1000 publications, the number of institutional contexts in the dataset is higher than in the database at large. As shown in Figure 4, the higher the number of contexts, the better the model performance, so the actual performance of the model may be slightly lower than estimated.
Hyperparameters To investigate the impact of different hyperparameters on the experimental results, we trained the proposed model using the parameter configurations shown in Table 6. We varied the number of randomly sampled contexts per entity from 1 to 20 and the maximum context length from 5 to 20. For Char-CNN, we changed the number and size of the convolutional filters. The margin value (m) in the loss function was varied from 0 to 0.8. We also experimented with different optimizers during training. Figure 4 depicts the overall trend of the F1 score increasing as the number of contexts increases, indicating that the model generally performs better with more context information. This aligns with expectations, as having more context information allows for better differentiation between two institution names as the same or different entities. However, it can be observed from the graph that all metrics decrease when the number of contexts is five. Upon analyzing the data, this is mainly attributed to the imbalance in the number of contexts across institution names. Some institutions have insufficient contexts to meet the specified number, causing the model to rely overly on features from institution names with an adequate number of contexts and to neglect those with fewer contexts. When the maximum context length is set to 15, the model achieves the best F1 score. This is because longer contexts may introduce noise, while shorter contexts may provide less information. As the margin value (m) increases, the F1 score, precision, and recall all show a decreasing trend. This indicates that learning from negative examples is more important for institution synonym recognition than learning from positive examples. Conclusions To address the limitations of existing matching models in the domain of institution synonym recognition, a novel institution synonym recognition model was proposed, incorporating multiple feature dimensions. However, it is worth noting that the model has certain limitations. For instance, its performance may be influenced by context length and context number, and further investigation is needed to assess its effectiveness in such scenarios. Despite these limitations, our model has shown promising results in significantly improving institution synonym recognition performance and addressing the shortcomings of related deep learning research in the field. In the future, our work will focus on exploring the interpretability of deep learning models, the construction of datasets from different databases, and the practical impact of these improvements in the field of academic evaluation. Figure 2. The structure of Char-CNN. Figure 3. The structure of the Highway network. Table 1. Different expressions of institution names. Table 2. Methods for institution name synonym recognition.
Huang [5] addressed naming ambiguities, spelling errors, OCR errors, abbreviations, and omissions by employing two strategies: one utilizing a Naive Bayes model when training data are available, and the other employing a semi-supervised approach combining soft clustering and Bayesian learning when no learning resources are present. 3. Rule-based methods: These methods involve constructing rule libraries based on features derived from institution names (e.g., string similarity, substrings, word length, word order, and institution type) and additional features from the literature data (e.g., country, city, postal code, and author names) to merge institution name matches using feature-based rules. Huang [5] proposed a rule-based and edit-distance-based approach for institution name standardization. They first constructed an institution-author table and used the author, country, postal code, and other features for potential institution name matching. Then, they calculated similarity by combining the Jaccard word similarity, substring matching, and the edit distance to identify institution name variants. Researchers from Bielefeld University developed over 50,000 pattern-matching rules utilizing features such as the institution name, start and end dates, URL, postal code, sectors (name, URL, and sub-classification), and relationships between institutions to disambiguate the author addresses in WOS and Scopus. 4. Entity linking-based methods: These methods resolve ambiguity by linking institution names in the literature to corresponding institutions in knowledge bases. Shao [20] proposed the ELAD framework, which utilizes knowledge graphs for entity linking, generating a candidate set of institution entities and then selecting the most probable institution entity based on string similarity. Wang [19] introduced a framework that utilizes open data resources to assist institution name standardization and attribute enrichment. It involves normalizing institution names and enriching attributes using open data resources, constructing a data linking model for multidimensional attribute alignment, and proposing a dynamic management approach for open data. 5. Deep learning-based methods: These methods utilize word embedding models to obtain distributed vectors containing rich semantic information from raw data. Table 5. Error analysis results.
7,226.8
2024-07-14T00:00:00.000
[ "Computer Science" ]
Influence of the Crystal Forms of Calcium Carbonate on the Preparation and Characteristics of Indigo Carmine-Calcium Carbonate Lake In this study, indigo carmine (IC)-calcium carbonate lakes with different crystalline forms of calcium carbonate were prepared through co-precipitation methods, and the properties of these lakes and their formation mechanisms were investigated. The results showed that amorphous calcium carbonate (ACC) exhibited the smallest particle size and the largest specific surface area, resulting in the highest adsorption efficiency. Vaterite, calcite, and aragonite followed ACC in decreasing order of adsorption efficiency. Kinetic analysis and isothermal analysis revealed the occurrence of chemisorption and multilayer adsorption during formation of the lakes. The FTIR and Raman spectra suggested the participation of sulfonic acid groups in chemisorption. The presence of IC significantly altered the TGA curves by changing the weight loss rate before decomposition of calcium carbonate. EDS analysis revealed that the adsorption of IC predominantly happened on the surface of the calcium carbonate particles rather than in the interior. Introduction Indigo, one of the oldest known pigments [1], has limited solubility in water. To address this limitation, indigo carmine (IC), a sulfonic acid derivative of indigo, has found widespread use in industries such as food, medicine, printing, and dyeing [2]. IC, also recognized as Food Blue No. 1, Food Cyan No. 2, or simply Food Blue, is notable for its high solubility in water. It retains the characteristic blue color of indigo, rendering it a stable non-azo colorant [3][4][5]. Colorant lakes containing water-soluble colorants like IC enhance stability and dyeing performance. Traditionally, aluminum hydroxide is the substrate used to prepare food-grade colorant lakes [6]. However, aluminum has been linked to diseases such as osteochondrosis and neurological disorders in humans. When aluminum is injected directly into the brains of animals or accidentally enters the human brain through, for example, dialysis, it can be neurotoxic, leading to the neurological syndromes of dialysis encephalopathy or dialysis dementia. Cognitive and other neurological deficits may exist in groups occupationally exposed to high concentrations of aluminum dust [7,8]. Reducing aluminum intake from food sources has become a recent focus in the food industry and academia. Our team suggests calcium carbonate-based colorant lakes as substitutes for aluminum hydroxide-based ones [9]. Calcium carbonate is a common food additive with a wide range of applications as a supplement, colorant, bulking agent, and antacid [10][11][12]. Solid calcium carbonate has one amorphous form and three crystal forms: calcite, aragonite, and vaterite. Amorphous calcium carbonate (ACC) is the least stable form of calcium carbonate but has the highest surface area, which gives it a high adsorption capacity. Synthesis of calcium carbonate often yields a more stable crystalline form. Our team has successfully developed a Monascus pigments (MPs)-calcium carbonate lake. This lake demonstrates significantly enhanced light stability compared to pure MPs [13].
In this study, the initial preparation of calcium carbonate colorant lakes with varied crystalline forms was accomplished through co-precipitation, marking a significant advancement. The fundamental properties of ACC, the three polymorphs, and their colorant lakes, including zeta potential, particle size, BET specific surface area, and color stability, were investigated. Then, scanning electron microscopy (SEM) and X-ray diffraction (XRD) techniques were used to analyze the micro-morphological characteristics of the calcium carbonates and lakes. Next, the formation of IC-calcium carbonate complexes was investigated through kinetic and isothermal adsorption analysis. Moreover, the state of IC and calcium carbonate within the lakes and their interaction were investigated using Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, and thermogravimetric analysis with differential scanning calorimetry (TGA-DSC). Finally, energy-dispersive X-ray spectroscopy (EDS) was utilized to determine the distribution of IC within the lakes. This research sheds light on the formation mechanism of calcium carbonate-based colorant lakes and suggests ways to enhance their quality. Chemicals IC (96% w/w) was purchased from Macklin Reagent Co., Ltd. (Shanghai, China) and used as received. All other chemicals, such as calcium chloride, sodium carbonate, ethanol, sodium hydroxide, and hydrochloric acid, were of analytical grade and obtained from local suppliers in China. The deionized (DI) water (~18.25 MΩ·cm) used in solution preparation in this study was obtained from the laboratory's water purification system (HYP-QX-UP, Huiyipu Ltd., Beijing, China). Preparation of Calcium Carbonate with Different Crystalline Forms The methods employed for the preparation of calcium carbonate were adapted from Nebel & Epple [14], Trushina, Bukreeva, & Antipina [15], and Zou et al. [16]. However, these methods were systematically modified in a preliminary investigation with the objective of optimizing the purity of each calcium carbonate. 1. Calcite: A mixture of 150 mL of 0.2 mol/L CaCl2 and 150 mL of 0.2 mol/L Na2CO3 was stirred at 600 r/min at ambient temperature for 30 min. The resulting solution was centrifuged (5000 r/min for 5 min) to obtain sediment. The sediment was then dried in an oven at 45 °C for 12 h and ground to obtain the IC-calcite lake. 2. Aragonite: 150 mL of 0.2 mol/L CaCl2 and 150 mL of 0.2 mol/L Na2CO3 were each preheated to 80 °C in a water bath. After rapid mixing, the solution was centrifuged (5000 r/min for 5 min) to obtain sediment. The sediment was dried in an oven at 45 °C for 12 h and ground to obtain the IC-aragonite lake. 3. Vaterite: A mixture of 150 mL of 0.1 mol/L CaCl2 solution (60% DI water + 40% ethanol) and 150 mL of 0.1 mol/L Na2CO3 solution (60% DI water + 40% ethanol) was stirred at 600 r/min at ambient temperature for 30 min. After centrifuging the solution at 5000 r/min for 5 min to obtain sediment, the sediment was dried in an oven at 45 °C for 12 h and ground to obtain the IC-vaterite lake. 4. ACC: A rapid mixture of 20 mL of 0.1 mol/L CaCl2 solution, 20 mL of 0.1 mol/L Na2CO3 solution, and 40 mL of ethanol was introduced into the filter cup of a vacuum membrane filtration set. After agitating the filter cup until many particles appeared in the reaction solution, 250 mL of ethanol was rapidly added and vacuum-filtered. The resulting solid residue was dried in a vacuum oven at 45 °C for 12 h and ground to obtain the IC-ACC lake.
Preparation of Calcium Carbonate Colorant Lakes

IC was dissolved in sodium carbonate solution, and calcium carbonate was synthesized as in Section 2.2 to make the lakes. Three colorant lakes with different IC additions were prepared for each of the four calcium carbonates. IC was added at CaCO3:IC ratios of 3 g:50 mg, 3 g:200 mg, and 3 g:500 mg, respectively. Specific formulations of the samples are provided in Table S1.

Characterization of CaCO3 and Its Colorant Lakes

SEM and EDS: the morphological characteristics of calcium carbonate and the colorant lakes were examined using a scanning electron microscope (Hitachi S-4800, Hitachi, Ltd., Tokyo, Japan). For surface observation, the samples were directly sputtered with gold before observation. For interior observation, powdered samples were frozen in liquid nitrogen, ground with a mortar and pestle, and then sputtered with gold. Elemental distribution on the surface and interior of the particles was analyzed using the spot-scanning mode in EDS analysis.

XRD: qualitative and quantitative determinations of the crystal composition in calcium carbonate and the colorant lakes were conducted. Calcium carbonate and the colorant lakes were scanned using an X-ray diffractometer (Rigaku SmartLab SE, Tokyo, Japan) in fine scanning mode, with a scanning speed of 2°/min and a scanning range of 2θ from 5° to 90°. Data analysis was conducted using MDI Jade 6.5.

Zeta potential and particle size: the surface charge of particles in the samples was analyzed using a zeta potential meter. The prepared samples were dispersed in DI water, and 1 mL of the suspension was injected into a capillary zeta potential cell and measured using a Zetasizer Nano ZS90 (Malvern Instruments, Malvern, UK). The particle size of the samples was determined using a Mastersizer 2000 (Malvern Instruments, UK), with refractive indices of 1.656 for calcium carbonate and 1.33 for water. All samples were tested in triplicate, and the volume-weighted mean diameter, d4,3, was recorded.

BET test (multi-point Brunauer-Emmett-Teller specific surface area test): the tests were performed using an Autosorb iQ (Quantachrome, Houston, TX, USA). The samples were degassed at 120 °C for 6 h to remove any residual gases and then subjected to nitrogen adsorption, from which the specific surface area of the samples was determined.

Light stability: the blue color value of the samples was assessed using a colorimeter (Konica Minolta CM-3610A, Tokyo, Japan). The powder of each sample was put into a small ziplock plastic bag, spread, and gently pressed to create a flat surface for testing. The b* values of the samples were measured before and after 24 h of light incubation (12,000 lx) at 45 °C (BSG-300, Boxun, Shanghai, China). The change in the b* value (∆b*) was determined by subtracting the post-illumination b* value from the pre-illumination b* value. IC powders, along with samples created by directly blending IC with each form of calcium carbonate at a ratio of 3 g CaCO3 to 500 mg IC, also underwent light stability testing for comparative analysis.
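For concreteness, the light-stability evaluation described above can be scripted directly: compute ∆b* per replicate, summarize as mean ± SD, and compare samples by one-way ANOVA. The sketch below illustrates this workflow; the b* readings are invented placeholders, not measured values.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical triplicate b* readings before and after 24 h of illumination;
# real values would come from the CM-3610A colorimeter described above.
b_star = {
    "IC":            {"pre": [68.1, 67.9, 68.3], "post": [59.2, 58.8, 59.5]},
    "calcite_lake":  {"pre": [41.5, 41.8, 41.2], "post": [40.9, 41.1, 40.6]},
    "vaterite_lake": {"pre": [45.0, 44.7, 45.3], "post": [43.1, 42.8, 43.4]},
}

delta_b = {}
for name, vals in b_star.items():
    # delta b* = pre-illumination b* minus post-illumination b*, per replicate
    d = np.array(vals["pre"]) - np.array(vals["post"])
    delta_b[name] = d
    print(f"{name}: delta b* = {d.mean():.2f} +/- {d.std(ddof=1):.2f}")

# One-way ANOVA across samples (p < 0.05 taken as significant, as in the text)
f_stat, p_val = f_oneway(*delta_b.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
```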
Kinetic Analysis of Adsorption

Co-precipitation mode: kinetic experiments were conducted for the four kinds of calcium carbonate. For each type, the CaCl2 solution and Na2CO3 solution were prepared following the procedure outlined in Section 2.2. The timing started when IC (CaCO3:IC = 2 g:50 mg) was dissolved in the sodium carbonate solution and mixed with the CaCl2 solution (and ethanol for vaterite). The final volume after mixing was 200 mL. The reaction solutions were continuously stirred using a magnetic stirrer (600 r/min) and sampled sequentially with a disposable syringe (about 1.5 mL) at predetermined time intervals. The samples were then filtered using a 0.22 µm filter membrane. The concentration of IC in the filtrate obtained at each time point, labeled C_t (mg/L), was measured at 610 nm using UV-Vis spectrophotometry [17], and the amount of IC adsorbed by calcium carbonate was calculated at each time point. The amount of IC adsorbed per unit mass of calcium carbonate, labeled q_t (mg/g), was calculated using Equation (1), and the curve of q_t versus time was plotted. Equations (2) and (3) give the pseudo-first- and pseudo-second-order models for the kinetic curves. Finally, the adsorption ratio at each time point was calculated using Equation (4):

$$q_t = \frac{IC_T - C_t\,V}{M_{CaCO_3}} \quad (1)$$

$$\ln(q_e - q_t) = \ln q_e - k_1 t \quad (2)$$

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad (3)$$

$$\text{adsorption ratio } (\%) = \frac{IC_T - C_t\,V}{IC_T} \times 100 \quad (4)$$

where IC_T (mg) is the total mass of IC added in the experiment, V (L) is the volume of the reaction solution, M_CaCO3 (g) is the mass of CaCO3, q_e (mg/g) is the amount of adsorbate adsorbed per unit mass of adsorbent at equilibrium, q_t (mg/g) is the amount of adsorbate adsorbed per unit mass of adsorbent at time t, k_1 (min−1) is the pseudo-first-order rate constant, and k_2 (g/(mg·min)) is the pseudo-second-order rate constant.

Surface adsorption mode: a specific quantity of each of the four calcium carbonate types prepared in Section 2.2 was weighed and dispersed in the IC solution (calcium carbonate was added at a ratio of CaCO3:IC = 2 g:50 mg) to achieve surface adsorption. The total volume after mixing was adjusted to 200 mL. The subsequent procedures, encompassing both the collection of samples and the kinetic analysis, were carried out identically to those employed in the co-precipitation mode.

Isothermal Analysis of Adsorption

The two reaction solutions for preparing each of the four forms of calcium carbonate were prepared according to the method described in Section 2.2. Subsequently, the two solutions were rapidly mixed in a 250 mL triangular flask and subjected to magnetic stirring at 600 r/min, with the timing initiated simultaneously. Samples were taken and filtered through a 0.22 µm membrane every 30 min, after which more IC was added to the reaction system immediately. The concentration of IC in the filtrate, C_e (mg/L), was measured using the method described in Section 2.5. The experiment was ended when the reaction solution was saturated with IC. The amount of IC adsorbed by calcium carbonate at each time point was calculated using Equation (5) to find the mass of IC adsorbed per unit mass of calcium carbonate at equilibrium, labeled q_e (mg/g), and the curve of q_e versus C_e was plotted. The curves were fitted to the Freundlich model (Equation (6)) and the Langmuir model (Equation (7)).
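As a worked illustration of the kinetic treatment, the sketch below computes q_t from the filtrate concentrations via Equation (1) and fits the integrated forms of the pseudo-first-order (Equation (2)) and pseudo-second-order (Equation (3)) models; the time points and concentrations are synthetic placeholders, not experimental data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Experimental constants from the protocol above (CaCO3:IC = 2 g : 50 mg, 200 mL)
IC_T = 50.0      # total IC added (mg)
V = 0.2          # reaction volume (L)
M = 2.0          # mass of CaCO3 (g)

# Hypothetical sampling times (min) and filtrate IC concentrations C_t (mg/L)
t = np.array([1, 2, 5, 10, 15, 20, 30, 45, 60])
C_t = np.array([210, 195, 170, 150, 140, 133, 128, 125, 124])

# Equation (1): amount adsorbed per unit mass of CaCO3 at time t (mg/g)
q_t = (IC_T - C_t * V) / M
# Equation (4): adsorption ratio at time t (%)
ratio = (IC_T - C_t * V) / IC_T * 100

# Integrated pseudo-first-order model, equivalent to Equation (2)
def pfo(t, qe, k1):
    return qe * (1 - np.exp(-k1 * t))

# Integrated pseudo-second-order model, equivalent to Equation (3)
def pso(t, qe, k2):
    return qe**2 * k2 * t / (1 + qe * k2 * t)

for name, model in [("pseudo-first-order", pfo), ("pseudo-second-order", pso)]:
    popt, _ = curve_fit(model, t, q_t, p0=[q_t.max(), 0.1], maxfev=10000)
    resid = q_t - model(t, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((q_t - q_t.mean())**2)
    print(f"{name}: qe = {popt[0]:.2f} mg/g, k = {popt[1]:.4f}, R^2 = {r2:.3f}")
```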
Due to the rapid crystallization of ACC in aqueous solutions, the isothermal experiment for ACC could not be carried out with only one calcium carbonate preparation at the beginning of the test. Thus, for each addition of IC, a new ACC was prepared using the method described in Section 2.2, and q_e and C_e were determined for each solution for plotting.

$$q_e = \frac{IC_T - C_e\,V}{M_{CaCO_3}} \quad (5)$$

$$q_e = K_f\,C_e^{1/n} \quad (6)$$

$$q_e = \frac{q_m\,k_l\,C_e}{1 + k_l\,C_e} \quad (7)$$

where IC_T, V and M_CaCO3 are the same as defined above, K_f (mg^(1−1/n)·g^(−1)·L^(1/n)) is the Freundlich constant characterizing a particular adsorption isotherm, n (dimensionless) is the Freundlich constant representing adsorption intensity, q_m (mg/g) is the q_e for a complete monolayer, and k_l (L/mg) is the sorption equilibrium constant.

FTIR Analysis

The FTIR technique was used to analyze the possible bonding between IC and calcium carbonate in the lakes. Before testing, the samples were dried in a 60 °C oven for 12 h. Pellets of IC, calcium carbonate, and the lakes were prepared by pressing them with potassium bromide, and these pellets were then analyzed using a Nicolet IS5 spectrometer (Thermo Fisher, Waltham, MA, USA) over a wavenumber range of 400-4000 cm−1 with a resolution of 2 cm−1.

Raman Spectra Analysis

Raman spectra of IC, calcium carbonate, and the lakes were obtained using a Raman spectrometer (HORIBA LabRAM Odyssey, HORIBA Jobin Yvon, France). The excitation wavelength was 532 nm, covering a test range of 0-3000 cm−1.

TGA-DSC Analysis

The thermal properties of calcium carbonate and the colorant lakes were analyzed using a TGA-DSC instrument (TA SDT Q600, New Castle, DE, USA) within the range of 30-800 °C. The powdered samples were placed into an alumina crucible, which was purged with nitrogen (N2). A constant heating rate of 10 °C per minute was maintained throughout the procedure.

Statistical Analysis

The data were processed and graphed using Origin 2018 (v. 9.5). All assays were conducted in triplicate, and the results are expressed as mean ± standard deviation (SD). Means were compared through a one-way ANOVA test using IBM SPSS Statistics 26, and differences were considered significant when p < 0.05.

Characterization of CaCO3 and Colorant Lakes

Figure 1 shows SEM images of the four forms of calcium carbonate and their respective lakes, with the four kinds of calcium carbonate in the first column. Calcite displays typical rhombohedral structures with cracked surfaces, resembling stacked shales, consistent with the findings reported by Rodriguez-Blanco et al. [18]. Aragonite is rod-shaped with varying lengths, and its surface resembles dry and cracked hair. Vaterite particles resemble grape beads, forming clusters with the appearance of large cavities, as also reported by Nebel et al. [14]. Compared to the other forms, ACC exhibits much smaller spherical particles, aggregating to form a relatively compact structure, consistent with literature reports [19,20]. The addition of IC to the forming solution of each calcium carbonate did not noticeably change their micro-morphology, except for the introduction of bar structures (in red circles) into calcite, vaterite, and ACC. As confirmed by Figure S1a, the bar structure is a typical feature of IC particles, indicating that a few IC particles can form when a high concentration of IC is used to fabricate the lakes.

XRD was used to analyze the crystalline composition of calcium carbonate and the colorant lakes. The XRD diffractogram of IC is shown in Figure S1b, and it confirms that the characteristic peaks of the IC crystal are mainly scattered in the 2θ range from 5° to 30°.
Figure 2 shows that in the case of calcite, the peaks of calcite remain unchanged after adding IC to form the calcite lake, while characteristic peaks of IC emerge in the diffractogram of the calcite lake. The results confirm the high purity (100%) of calcite, as no characteristic peak of any other crystal form appears in its diffractogram. The diffractogram of the calcite lake indicates that a crystalline solid of IC formed in the lake, consistent with the bar structures marked by red circles in the SEM image of calcite (Figure 1). Regarding aragonite, apart from its characteristic peaks, a peak belonging to calcite also appeared. In the diffractogram of the aragonite lake, one peak belongs to calcite and three peaks belong to vaterite, indicating slight interference by IC in the crystallization of aragonite. However, no characteristic peak of IC is present in the diffractogram of the aragonite lake. In the diffractogram of vaterite, a few characteristic peaks belonging to calcite appear alongside the vaterite peaks, suggesting the presence of detectable amounts of calcite formed during vaterite formation. However, quantitative analysis showed that vaterite still accounted for more than 92% of the composition, still indicating high purity. For ACC and the ACC lake, no crystal peak is observed in their diffractograms, consistent with expectations. Similar to calcite and vaterite, characteristic peaks of crystalline IC also appear in the diffractogram of the ACC lake. These results are consistent with the observations from SEM shown in Figure 1. The characteristic peaks of IC appeared in all colorant lakes except the aragonite lake. As described in Section 2.2, only aragonite was prepared at a high temperature (80 °C), suggesting that high temperature could be the key factor inhibiting the formation of crystalline IC.

Calcium carbonate crystals, like all crystals, contain lattice defects. When a lattice ion is vacant or replaced by a foreign ion, the electrical neutrality is broken, generating residual charges [21]. The zeta potential of all four forms of calcium carbonate was negative (Table 1). Calcite exhibited the lowest zeta potential, while ACC displayed the highest zeta potential, approaching zero, which implies that crystalline calcium carbonates accumulate negative charges more easily. Interestingly, the potentials of calcite, aragonite, and ACC were
lower than those of their respective colorant lakes, while the potential of vaterite was higher than that of the vaterite lake. Positively charged groups (-Ca+, -Ca(OH2)+) and negatively charged groups (-CO3−, -CO3(OH2)−) exist on the surface of calcium carbonate, with the zeta potential depending on the relative amounts of these positive and negative charges [13]. It is reasonable to expect that the variations in lattice structures among different calcium carbonates would result in distinct charged groups on their surfaces, subsequently causing variation in zeta potential. Theoretically, added IC would bond its sulfonic acid groups (negatively charged) with positive groups on the lake's surface or interior, reducing the particle zeta potential. However, only vaterite and its lake were consistent with this theory, the vaterite lake exhibiting a lower zeta potential. This phenomenon suggests that upon adding IC, the IC may influence the lattice structure of calcium carbonate, leading to the formation of more positively charged groups rather than solely forming electrostatic bonds with them.

ACC is the precursor of the calcium carbonate crystal phases, exhibiting an average particle size of approximately 368.3 nm (Table 1), which is close to the experimental results from Tobler et al.
[22]. The particle diameter of the vaterite lake exhibited a reduction with the incremental addition of IC, in line with prior research in which Monascus pigments similarly reduced the size of calcium carbonate particles [13]. While the trend was not as evident in the aragonite and calcite lakes, a comparison between the pure calcium carbonates and their respective lakes revealed notable disparities in particle diameter. This suggests that the addition of IC has a significant impact on the particle size across all examined crystal forms. BET analysis revealed that ACC possessed a significantly larger specific surface area than the crystalline forms, laying the foundation for its superior adsorption properties [23,24]. Vaterite exhibited the largest specific surface area among the crystalline forms due to its grape-bead-like structure.

To evaluate the stability of the blue color in the colorant lakes, the ∆b* values of the colorant lakes before and after 24 h of light illumination were compared, as shown in Table 1. A smaller absolute ∆b* value indicates better stability. The absolute value of ∆b* for IC was greater than that of all the colorant lakes. This indicates that the colorant lakes have higher light stability than pure IC, verifying the significance of using a lake as a substitute for the pigment. There was no significant difference among the ∆b* values of the aragonite lakes, nor among those of the ACC lakes. However, significant differences were identified among the ∆b* values of the vaterite lakes and of the calcite lakes with varying IC contents. These results indicate that the variation of IC content does not significantly alter the light stability of the aragonite and ACC lakes, but does affect the light stability of the vaterite and calcite lakes. Among all the tested samples, the calcite lake with the lowest IC content exhibited the smallest absolute ∆b* value, confirming the highest light stability. Control samples were prepared by directly mixing each form of calcium carbonate with IC at a ratio of 500 mg IC per 3 g CaCO3, which was the same ratio used in preparing the calcite lake (500 mg), aragonite lake (500 mg), vaterite lake (250 mg) and ACC lake (33.3 mg). The Aragonite + IC sample exhibited a significantly lower absolute ∆b* value compared to that of the aragonite lake, the Vaterite + IC sample showed no significant difference in absolute ∆b* value compared to the vaterite lake, and the Calcite + IC and ACC + IC samples showed significantly higher absolute ∆b* values compared to those of their corresponding lakes. In conclusion, the stability of the colorant lakes is superior to that of pure IC, which provides a robust foundation for the utilization of colorant lakes.
Kinetic Adsorption Analysis

During the precipitation of calcium carbonate from calcium chloride and sodium carbonate, calcium carbonate nanoparticles first form in the reaction solution. Subsequently, these nanoparticles aggregate to form micrometer-sized amorphous particles, which then begin to precipitate. Eventually, these amorphous particles undergo spontaneous transformation into a crystalline structure [25]. The kinetic adsorption curve of IC on calcite (Figure 3a) displays a maximum q_t value of 10.9 mg/g at 0 min. In the initial stage of the reaction, the curve exhibits a gradual downward trend, with fluctuations around 7 mg/g observed after 15 min. At the very beginning, the calcium carbonate was in an amorphous state, thus exposing most binding sites, which resulted in the highest q_t values. However, as the unstable amorphous state transformed into calcite, it released the adsorbed IC molecules back into the reaction solution, leading to a decrease in the q_t value. In fact, adsorption and desorption occurred simultaneously and reached equilibrium after a certain time. The kinetic curves of vaterite (Figure 3c) and ACC (Figure 3d) closely resemble that of calcite, showing a maximum q_t at the beginning of the curve followed by a gradual decrease until the end. This is shown more visually in the graph of adsorption ratios (Figure 3f). While calcite and vaterite had similar q_t values of approximately 6 mg/g at the end of the test, ACC displayed a final q_t of about 2.5 mg/g. Before crystallization, calcium carbonate existed in the form of ACC, which served as the precursor phase of the crystalline forms. ACC particles, being small and having a large specific surface area compared to the three crystal forms (Table 1), theoretically expose more adsorption sites on their surface. However, ACC exhibited the lowest adsorption capacity in the test, showing the lowest final q_t. This unexpected result can be ascribed to the much lower IC concentration used in the reaction solution of the ACC lake. As described in Section 2.3, a low concentration of IC was utilized in the ACC forming solution to maintain the same CaCO3:IC level as that used for the other three forms of calcium carbonate. Aragonite, however, showed a quite different kinetic adsorption curve compared to the others. The kinetic adsorption curve of IC onto aragonite (Figure 3b) reveals a continuous increase in q_t with time. After 60 min, the q_t of aragonite exceeds that of the other three forms of calcium carbonate, reaching more than 15 mg/g. Aragonite thus exhibited the highest q_t among all forms of calcium carbonate. However, as shown in Table 1, aragonite exhibited a medium zeta potential, the largest diameter, and the lowest specific surface area among the four kinds of calcium carbonate, which cannot explain the superiority of aragonite in the adsorption of IC. As reported in the literature, temperature also significantly influences the adsorption process [26,27]. High temperature is a necessary condition for preparing aragonite (80 °C in this study), while all the other forms of calcium carbonate were prepared at ambient temperature. A previous study from our team confirmed that the adsorption between MPs and calcium carbonate (mostly calcite) is an endothermic process [13]. Since both IC and MPs are negatively charged in the co-precipitation solution, there is a high possibility that the adsorption between IC and aragonite is also an endothermic reaction, which could explain the high q_t of aragonite in the kinetic experiment.
The pseudo-second-order model was applied to fit the four curves (Figure 3e), yielding R² values greater than 0.9 for calcite, vaterite, and ACC, indicating good fitting. This suggests electrostatic attraction between the calcium carbonate surface and IC, and also the participation of chemisorption in the adsorption process [28,29]. For aragonite, however, the fitting was more consistent with the pseudo-first-order model (Figure S2). This suggests that pore diffusion in the adsorbent, rather than chemisorption, is the decisive step in the whole adsorption process.

A comparative assessment of the surface adsorption and co-precipitation techniques was performed on the crystalline calcium carbonates, focusing on their kinetic profiles as depicted in Figure S3. Owing to its pronounced instability in aqueous environments, amorphous calcium carbonate (ACC) was excluded from this analysis. As depicted in Figure S3, for each form of calcium carbonate, the q_t during the surface adsorption process is consistently lower than that observed in the co-precipitation process throughout the experiment, with the exception of the initial phase. The findings indicate that co-precipitation is markedly more effective than surface adsorption for immobilizing IC molecules. While both methods facilitate electrostatic attraction and chemisorption, co-precipitation may additionally employ a distinct mechanism (possibly a physical entanglement effect) to secure the IC molecules.

Isothermal Adsorption Analysis

The analysis of the kinetic adsorption curves revealed that crystalline calcium carbonate exhibited stronger adsorption than ACC, contrary to previous studies [30]. This discrepancy is attributed to the low IC concentration in the ACC kinetic adsorption analysis. Thus, isothermal adsorption was conducted to reflect the maximum adsorption capacity of the calcium carbonates for IC.
Figure 4a-d shows the isothermal adsorption curves of IC onto the four forms of calcium carbonate. The curve of each calcium carbonate successively shows a gradual-increase stage, a plateau stage, and a steep-increase stage. The gradual-increase stage represents the adsorption process, the plateau stage signifies the saturation of adsorption sites, and the steep-increase stage results from IC saturation in the reaction solution. A study on IC adsorption onto aluminum hydroxide reported a similarly shaped isothermal adsorption curve [6]. Based on the plateaus of the curves in Figure 4, the maximum amount of IC adsorbed (q_e at the plateau) is ~1100 mg/g for calcite, ~500 mg/g for aragonite, ~3500 mg/g for vaterite, and ~6000 mg/g for ACC. ACC exhibited the highest q_e at the plateau stage, confirming its superior adsorption capacity, which is likely attributable to its significantly higher surface area (Table 1). The isothermal data were fitted to the Freundlich equation (Figure 4e) and the Langmuir equation (Figure 4f). The results indicate that the adsorption of IC by the four forms of calcium carbonate is more consistent with the Freundlich model, suggesting multimolecular-layer adsorption [31]. However, aragonite and vaterite exhibited lower R² values, indicating that the adsorption of these two crystal forms is not a typical multimolecular-layer adsorption.
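The model comparison above can be reproduced with a short fitting script. The sketch below fits synthetic placeholder (C_e, q_e) data to the Freundlich (Equation (6)) and Langmuir (Equation (7)) models and compares their R² values, mirroring the analysis of Figure 4e,f.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: C_e (mg/L) and q_e (mg/g), placeholders only
C_e = np.array([5, 10, 25, 50, 100, 200, 400], dtype=float)
q_e = np.array([120, 210, 400, 620, 900, 1250, 1700], dtype=float)

# Equation (6): Freundlich isotherm
def freundlich(C, Kf, n):
    return Kf * C ** (1.0 / n)

# Equation (7): Langmuir isotherm
def langmuir(C, qm, kl):
    return qm * kl * C / (1 + kl * C)

def r_squared(y, y_fit):
    return 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)

for name, model, p0 in [("Freundlich", freundlich, [50.0, 2.0]),
                        ("Langmuir",  langmuir,  [2000.0, 0.01])]:
    popt, _ = curve_fit(model, C_e, q_e, p0=p0, maxfev=10000)
    print(f"{name}: params = {np.round(popt, 4)}, "
          f"R^2 = {r_squared(q_e, model(C_e, *popt)):.3f}")
```

A higher R² for the Freundlich fit would point to multilayer adsorption, as concluded in the text.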
FTIR Analysis

In Figure 5, the peaks of calcium carbonate near 710 cm−1, 1080 cm−1, and 1450 cm−1 are the C-O in-plane deformation vibrational peaks, the C-O stretching vibrational peaks, and the C-O antisymmetric stretching vibrational peaks, respectively [20]. Additionally, the peaks near 1797 cm−1 represent the stretching vibration of C=O, while the peaks near 860 cm−1 correspond to the CO3²− out-of-plane deformation vibrational peaks [32]. The appearance of peaks in the spectra of the colorant lakes near 1640 cm−1 (except for the aragonite lake) represents a C=C stretching vibration, which is not present in the spectra of calcium carbonate, indicating the successful adsorption of IC onto the calcium carbonate particles. Table S2 shows further details of the peak assignments. The spectra of the different forms of calcium carbonate exhibit slight shifts and intensity changes in the characteristic peaks, attributed to differences in their lattice structures. The aragonite lakes did not show characteristic peaks of IC, consistent with the previous XRD results and likely due to minimal IC adsorption by aragonite. The isothermal adsorption results further confirm the relatively low adsorption efficiency of aragonite. Although the addition of IC caused negligible structural changes in the various crystalline calcium carbonates, there are discernible shifts in the characteristic peaks of IC in the colorant lakes. Of particular significance is the observed shift of the IC spectral peak from 1029 cm−1 to 1034 cm−1 within the colorant lakes, a peak indicative of the characteristic vibrational frequency associated with the sulfonic acid group [17]. This peak shift corroborates the indication of chemisorption from the kinetic analysis. The prominent peaks ranging from ~1418 to ~1495 cm−1 in the spectra of the four calcium carbonate samples correspond to the C-O antisymmetric stretching vibrations. Following the formation of the lakes, these peaks exhibit varying degrees of shifts, as evidenced by the altered curves in the lake spectra.
Raman Analysis

In Raman analysis, calcium carbonate crystals display characteristic peaks around 1100 cm−1 for the symmetric stretching vibration (ν1) and around 700 cm−1 for the doubly degenerate in-plane bending mode (ν4). The Raman spectra of calcium carbonate can be divided into two regions, namely the lattice modes below 400 cm−1 (Figure S4) and the internal modes above 400 cm−1 (Figure S5) [33]. ACC displays broad humps in the lattice-mode region of the Raman spectrum, consistent with the typical characteristics of amorphous solids. The spectral shapes obtained for the three crystalline forms closely match those reported by Wehrmeister et al. [34]. These Raman spectral results help to confirm the predominant form of calcium carbonate in each sample, consistent with the conclusions based on the XRD analysis.

In the internal-mode region (Figure S5), all crystalline calcium carbonates exhibit peaks at the spectral band of ν1 and a faintly visible spectral band of ν4, while only the band of ν1 is weakly visible in the spectrum of ACC. IC displays several distinct characteristic peaks near 1346 and 1576 cm−1. Despite the absence of IC detection in the XRD and FTIR analyses of the aragonite lakes, the appearance of IC's characteristic peaks in the Raman spectra confirms pigment adsorption by aragonite, demonstrating the higher sensitivity of Raman spectroscopy. Perhaps the primary reason for the superior sensitivity of Raman spectroscopy over the other two techniques is its capability to precisely target the sample's surface using a microscope, with a focal-point diameter that can be as small as 1 µm. Additionally, Raman spectroscopy is impervious to the intense absorption of water, thereby largely preserving the sample's original state [35-37]. With increasing IC concentration, the characteristic peaks of calcium carbonate in the Raman spectra of the colorant lakes weaken, while the characteristic peaks of IC at 1346 and 1576 cm−1 strengthen and shift slightly toward higher wavenumbers, indicative of sulfonic acid group participation in chemisorption [17]. Moreover, as more pigment is added to the colorant lake, a smaller shift in the pigment peak is observed. These results suggest that chemisorption involving sulfonic acid groups is relatively stronger at lower pigment content levels.
TGA-DSC Analysis

Figure 6 shows the TGA-DSC curves of the four calcium carbonates and their corresponding colorant lakes. The thermogravimetric curve illustrates the sample's weight loss as temperature increases. Figure 6 shows that all calcium carbonate samples undergo two distinct phases of weight loss at varying rates. The first phase, predominantly due to water evaporation, continues up to around 600 °C, while the second phase, from around 600 °C to 800 °C, is attributed to the decomposition of calcium carbonate. Specifically, the first-phase curves of ACC, aragonite, and vaterite can be further segmented into a rapid-weight-loss segment due to free water evaporation, followed by a gradual-weight-loss segment as bound water dissociates from the samples. In the TGA curve of ACC (Figure 6a), the rapid-weight-loss segment extends to approximately 200 °C, aligning closely with the results reported by Schmidt et al. [38]. The rapid-weight-loss segment of aragonite's curve (Figure 6e) reaches about 350 °C, and that of vaterite's (Figure 6g) reaches about 250 °C. Calcite (Figure 6c), however, exhibits a nearly uniform rate of weight loss during its first phase. TGA curves for the four types of calcium carbonate are extensively documented in the literature [38-41]. While these curves are typically similar in the second stage, they exhibit notable diversity in the first stage, presenting a range of curve shapes specific to each type. The observed variations in the first stage of the TGA curves can be attributed to the disparate preparation methods employed for calcium carbonate across various laboratories. These methods may alter the micro-morphology and hydration state of the calcium carbonate, which are critical factors determining the first stage of the curve, yet they do not affect the inherent chemical composition of calcium carbonate that determines the second stage. For all the calcium carbonates, the incorporation of IC resulted in a more pronounced weight loss before 600 °C, as indicated by the TGA curves of the lakes, particularly for those with the highest IC addition (500 mg, 250 mg or 33.3 mg). This observation may be attributed to the additional water introduced by IC or to the thermal decomposition of IC, which typically commences at temperatures above 350 °C [42]. In the DSC curves, two characteristic peaks are observed across all calcium carbonate samples and their corresponding lakes, occurring at approximately 50 °C and 700 °C. These peaks are indicative of the rapid evaporation of free water and the decomposition of calcium carbonate, respectively. Within the literature, there is a reported variation in the specific temperatures corresponding to the two characteristic peaks of calcium carbonate; this variability can be attributed to the diverse methodologies and parameters utilized in the synthesis of calcium carbonate [38,39]. Notably, the sharp exothermic peak at around 350 °C in the DSC curve of ACC (Figure 6b) signifies the crystallization of ACC into calcite [38]. Similarly, the exothermic peak at about 425 °C in the DSC curve of vaterite (Figure 6h) marks the phase transition from vaterite to calcite [43]. The higher temperature of vaterite's exothermic peak (425 °C) compared to ACC's (350 °C) corroborates vaterite's greater thermal stability. No crystallization peak is present in the DSC curves of aragonite (Figure 6f) and calcite (Figure 6d), denoting their superior thermal stability relative to ACC and vaterite. The presence of a single exothermic peak implies homogeneity of the ACC particles [36]. Compared to the curves of the pure calcium carbonates, an additional endothermic peak is observed at approximately 550 °C
in the curves of the ACC lake, calcite lake, and vaterite lake, potentially due to the thermal decomposition of IC. It is noteworthy that only the aragonite lakes lacked this endothermic peak at approximately 550 °C, which could be attributed to the lower IC concentration in the aragonite lakes, arising from aragonite's reduced adsorption capacity.

EDS Analysis

To elucidate the distribution of IC within the colorant lakes, sulfur, an element exclusive to the pigment, was analyzed using EDS on both the surface and the interior (via cross-sections) of the lake particles. Owing to the diminutive size and dense texture of ACC, cross-sections were unattainable, precluding ACC from this phase of the EDS analysis.

The elemental distribution on both the surfaces and cross-sections of the three crystalline forms of calcium carbonate and their corresponding lakes is presented in Table S3, with representative examples of surfaces and cross-sections illustrated in Figure S6. Consistent with expectations, sulfur, a marker for the presence of IC, was not detected on the surfaces or cross-sections of the pure calcium carbonate samples. Notable sulfur content was observed on the surfaces of the calcite lake and vaterite lake (1.80% and 1.96%, respectively), whereas the aragonite lake surfaces exhibited no sulfur, likely reflecting aragonite's lower IC adsorption capacity. Cross-sectional analysis revealed sulfur solely in the calcite lake (0.02%), suggesting the presence of IC both on the surface and within the interior of the calcite lake particles, predominantly on the surface. The challenge of detecting sulfur via EDS on cross-sections stems from the difficulty in preparing suitable cross-sections and the serendipitous nature of locating such sections under SEM. In this study, out of five suitable calcite lake particle cross-sections identified, sulfur was found in only two. This suggests an uneven distribution of IC within the particles, or the presence of the pigment in only a subset of the particles. Either scenario could result in the inability of EDS to detect the pigment. Hence, the absence of sulfur on the cross-sections of the other two calcium carbonate lake particles does not preclude the potential internal presence of the pigment. Nevertheless, the findings provide compelling evidence for the existence of IC on the surface and in the interior of the calcite lake particles.
Conclusions

In this study, we synthesized various forms of calcium carbonate and effectively integrated them with IC using a co-precipitation approach to create colorant lakes. The addition of IC generally maintained the crystalline structure of calcium carbonate, with the exception of aragonite, which transitioned to exhibit a substantial vaterite presence. While the three crystalline forms of calcium carbonate produced micrometer-sized particles, ACC formed nanoparticles, resulting in the highest specific surface area. Kinetic analysis revealed that aragonite adhered to a pseudo-first-order model, whereas the other forms aligned more with a pseudo-second-order model, indicative of chemisorption. These findings were supported by FTIR and Raman spectroscopy, which verified the involvement of sulfonic acid groups in chemisorption. A comparative study of surface adsorption and co-precipitation methods showed that co-precipitation was superior for IC adsorption. Isothermal adsorption analysis demonstrated that ACC had the greatest IC loading capacity, followed in order by vaterite, calcite, and aragonite. The adsorption behavior of all four calcium carbonates was more consistent with the Freundlich isotherm model than with the Langmuir model, suggesting a multilayer adsorption process. The incorporation of IC markedly altered the TGA and DSC profiles of the calcium carbonates by introducing additional water content and impacting their microstructure. EDS analysis revealed that IC was primarily located on the surface of the lake particles, with a minor intraparticle presence. These findings underscore the need for further research into the efficacy of the different crystalline forms in actual food applications.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods13162607/s1. Table S1: Formulation details for preparing calcium carbonates and their colorant lakes. Table S2: The FTIR spectra peak assignments for calcium carbonates, colorant lakes and IC [20,44-47]. Table S3: Atomic ratios of C, N, S, Ca, Na and O on surfaces and cross-sections of particles in calcium carbonates and their colorant lakes.

Figure 1. SEM images of the four kinds of calcium carbonates and their colorant lakes prepared with addition of different amounts of IC. The red circles mark the bar structures.

Figure 3. Kinetic curves for the adsorption of IC by the four calcium carbonates using the co-precipitation method ((a): Calcite; (b): Aragonite; (c): Vaterite; (d): ACC), fitting of the adsorption data to the pseudo-second-order model (e), and curves of adsorption ratio change with time (f).

Figure 5. Infrared spectra of IC, different forms of calcium carbonates and their colorant lakes (red dashed box shows a typical peak shift).
Figure 6. TGA curves (a,c,e,g) and DSC curves (b,d,f,h) of the four forms of calcium carbonates and their colorant lakes.

Figure S4: Lattice mode of Raman spectra of the four forms of calcium carbonate. Figure S5: Internal mode of Raman spectra of the four forms of calcium carbonate. Figure S6: Representative surfaces and cross-sections of calcium carbonates and lakes (red dots are sampling points).

Author Contributions: L.J.: Conceptualization, Methodology, Investigation, Writing, Data curation, Visualization. Y.L.: Investigation, Visualization. J.C.: Investigation, Visualization. J.M.: Investigation. D.Y.: Conceptualization, Supervision, Writing, Project administration, Funding acquisition. C.W.: Project administration. All authors have read and agreed to the published version of the manuscript.

Funding: The research was funded by the National Natural Science Foundation of China, grant number 32101877.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Table 1. Zeta potential, average diameter, specific surface area and ∆b* of different calcium carbonates and their colorant lakes. a The values with different superscript letters in the same column are significantly different (p < 0.05). * The samples were prepared by direct mixing of calcium carbonate and IC (CaCO3:IC = 3 g:500 mg).
10,796.8
2024-08-01T00:00:00.000
[ "Environmental Science", "Materials Science", "Chemistry" ]
Quantum electron motion control in dielectric: Attosecond science capitalizes on the extreme nonlinearity of strong fields, driven by few-cycle pulses, to attain attosecond temporal resolution and give access to the electron motion dynamics of matter in real time. Here, we measured the electronic delay response of the dielectric system triggered by a strong field of few-cycle pulses to be on the order of 425 ± 98 as. Moreover, we exploited the electronic response following the strong driver field to demonstrate all-optical light field metrology with attosecond resolution. This field sampling methodology provides a direct connection between the driver field and the induced ultrafast dynamics in matter. Also, we demonstrate quantum electron motion control in a dielectric using synthesized light waveforms. This on-demand electron motion control realizes the long-anticipated ultrafast optical switches and quantum electronics. This advancement promises to increase the limiting speed of data processing and information encoding to rates that exceed 1 petabit/s, opening a new realm of information technology.

The field-driven electronic response of dielectrics has been demonstrated elsewhere [18] and used for the carrier-envelope-phase (CEP) and waveform detection of the driver's few-cycle pulses [18,20-25]. Also, the strong-field interaction with thin films of SiO2 dielectric has been utilized to generate wideband coherent EUV radiation extending up to 40 eV [9]. Based on these studies, the strong-field-induced electron dynamics in the dielectric can be explained by electron motion in the conduction band (illustrated in Figure 1a,b). In a strong field (Figure 1a), an electron with initial wave vector q moves in reciprocal space by acquiring a time-dependent wave vector K(q, t) from the driving field, which can be expressed by [12,16]

$$K(q,t) = q + \frac{e}{\hbar}\int_{-\infty}^{t} F(z,t')\,dt' \quad (1)$$

where F(z, t') is the optical field strength and e is the electron charge. Therefore, all electrons are shifted in reciprocal space by the same wave vector

$$\Delta q(t) = \frac{e}{\hbar}\int_{-\infty}^{t} F(z,t')\,dt' \quad (2)$$

At a certain critical field strength (Figure 1b), the shift ∆q becomes greater than the Brillouin zone extension ∆k = 2π/a, causing electron Bragg reflection and Bloch oscillations. Thus, the dielectric constant and the dielectric material's optical properties are altered due to the strong polarizability. As a result, the dielectric system undergoes a semimetal-like phase transition [12,16], and the reflectivity changes in real time following the driver field. Hence, the time-resolved reflectivity measurement of the dielectric provides direct access to the induced electron motion dynamics in the system. Here, we exploited this field-driven electronic response and the related dielectric reflectivity modulation to directly measure the SiO2 dielectric system's electronic delay response in a strong few-cycle pulse. Also, we demonstrate all-optical light field sampling metrology with attosecond resolution based on the same principle. Finally, we utilized light field synthesis to control the electron motion in the dielectric using complex synthesized waveforms.

Electronic delay response in dielectric

The strong-field-induced current in the dielectric has previously been exploited to indirectly determine the carrier delay response by measuring the carrier injection delay time in a dielectric nanocircuit [22]. The measured traces (Figure 2b) show relative phase delays of 425 ± 98 as and 575 ± 45 as.
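To make Equations (1) and (2) concrete, the sketch below integrates a model few-cycle field to obtain the wave-vector shift ∆q(t) and compares its peak with the Brillouin zone extension 2π/a. The pulse parameters and the lattice constant a are illustrative assumptions, not the measured values.

```python
import numpy as np

e = 1.602176634e-19     # electron charge (C)
hbar = 1.054571817e-34  # reduced Planck constant (J s)

# Model few-cycle pulse: 600 nm carrier, ~4 fs envelope, 0.78 V/Angstrom peak
F0 = 0.78e10            # peak field strength (V/m); 1 V/Angstrom = 1e10 V/m
lam = 600e-9
omega = 2 * np.pi * 2.99792458e8 / lam
tau = 4e-15 / (2 * np.sqrt(np.log(2)))   # Gaussian envelope parameter

t = np.linspace(-15e-15, 15e-15, 6000)
dt = t[1] - t[0]
F = F0 * np.exp(-(t / tau) ** 2) * np.cos(omega * t)

# Equation (2): wave-vector shift acquired from the field
dq = (e / hbar) * np.cumsum(F) * dt      # 1/m

# Brillouin zone extension for an assumed lattice constant a ~ 5 Angstrom
a = 5e-10
dk = 2 * np.pi / a
print(f"max |dq| = {np.max(np.abs(dq)):.3e} 1/m, 2*pi/a = {dk:.3e} 1/m")
print(f"ratio = {np.max(np.abs(dq)) / dk:.2f}  (>1 would imply Bragg reflection)")
```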
Note that the driver pulse CEP is passively stabilized (phase jitter during the measurements is on the order of 100 mrad). Thus, the contribution of CEP jitter to the measured relative phase delay is reflected in the merged standard deviation (SD). The measured phase delay (Figure 2b) is attributed to the electronic delay response in a strong field [22]. The delay response increases at higher driver field strengths due to the increase of the system's polarizability and the excited carrier density. We calculated the number of excited carriers at different field strengths using the measured electric field of the driver pulse. The electric field presented in Figure 2c is retrieved from the derivative of the measured reflectivity modulation in Figure 2a, which represents the driver field's vector potential, as explained in SI Section I. The retrieved temporal intensity profile of the driver field is shown in Figure 2d. Remarkably, the excited carrier number behaves as a function of the instantaneous electric field of the driver pulse (black dashed line in Figure 2d): it has maximum values at the maxima and minimum values at the minima of the pulsed electric field. Both the maximum population (at t ≈ 2 fs) and the residual CB population (for t > 9 fs) monotonically increase with the excitation field amplitude, indicating high reversibility of the excitation, which occurs for a high interband coupling matrix element [16-19]. The reversible electronic dynamic response directly gives access to the triggering field of the pulse with high temporal resolution. The time-resolved measurement of the reflectivity modulation at different field strengths thus opens a direct window onto the electronic response in the dielectric.

All-optical light field sampling

The direct connection between the dielectric system's reflectivity modulation and the incident driver field shape allows the establishment of a direct and simple all-optical light field metrology. First, we conducted a numerical simulation to demonstrate the basic principle of this approach by calculating the reflectivity change and the reflected field of a thin SiO2 substrate in the strong field of a one-cycle pulse (spanning a broadband spectrum centered at 800 nm) at different field strengths (the calculation is explained in SI Section IV) [13]. The reflected and incident fields are normalized, overlapped in time, and plotted in Figure S3 (SI). The reflected field exactly follows the incident field at different intensities, with a maximum SD < 1.5%. This calculation proves that the dielectric reflectivity modulation due to the strong-field interaction follows the driver field waveform shape. To prove the viability of this methodology experimentally, we sampled an unknown synthesized waveform generated by four spectral channels (250-1000 nm) using the Light Field Synthesizer (LFS) apparatus mentioned above [26-28] (the LFS is explained in SI Section V and shown in Figure S4). The output beam from the LFS with an unknown waveform is divided into two separate beams as explained in the previous section (the setup is shown in Figure 1c). The first, high-intensity beam (pump) is used to alter the dielectric reflectivity. The pump beam's field strength is ~1.33 V/Å (well below the damage threshold of ~2.7 V/Å) [18,22]. The second beam (probe) has a lower intensity (~10% of the pump beam).
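The retrieval step invoked above (SI Section I), in which the reflectivity modulation tracks the driver's vector potential so that its time derivative returns the field, can be illustrated numerically. In the sketch below, the "measured" trace is synthetic and the proportionality between ∆R and the vector potential is an assumption of the model.

```python
import numpy as np

# Delay axis with 100-as steps, as in the experiment
tau = np.arange(-10e-15, 10e-15, 100e-18)

# Synthetic "measured" reflectivity modulation: assumed proportional to the
# driver's vector potential A(tau) (near-one-cycle pulse centered at 600 nm)
omega = 2 * np.pi * 2.99792458e8 / 600e-9
env = np.exp(-(tau / 2.5e-15) ** 2)
A = env * np.sin(omega * tau)
dR = 1e-3 * A + 1e-5 * np.random.default_rng(0).standard_normal(tau.size)

# Retrieved field: E(tau) proportional to -dA/dtau, i.e. the derivative of dR
E_retrieved = -np.gradient(dR, tau)
E_retrieved /= np.max(np.abs(E_retrieved))   # normalize, as in Figure S3

# Reference field for comparison
E_true = -np.gradient(A, tau)
E_true /= np.max(np.abs(E_true))
print(f"max deviation between retrieved and true field: "
      f"{np.max(np.abs(E_retrieved - E_true)):.3f}")
```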
The probe beam spectrum is recorded as a function of the time delay between the pump and probe pulses (with a delay step size of 100 as). Afterward, the unknown synthesized waveform of the strong driver field is retrieved from the measured reflectivity modulation. The demonstrated all-optical field metrology exhibits field sampling capability with attosecond temporal resolution for a single broadband waveform spanning two octaves, which was previously beyond reach [17,18,22]. This approach can be used under any experimental conditions, enabling a direct connection between the triggering few-femtosecond/attosecond field and the measured dynamics in potential time-resolved measurements, providing more insight into the ultrafast dynamics of matter. Also, this simple field sampling metrology promises a profound advancement in light field synthesis technology and in attosecond electron motion control in matter.

Quantum control of electron motion in dielectric

The light-field-induced electron motion in the dielectric can be controlled on demand by tailoring the driver field's shape with attosecond resolution. Utilizing these synthesized waveforms, electron triggering signals can be generated with equal time intervals of 0.9 fs. In Figure 4EII, the electron's highest triggering signal arises at four events (illustrated by shaded red) where the four signals are separated in time: the first and second signals are separated by 0.9 fs, the second and third signals are separated by 3.6 fs, and 0.9 fs separates the third and fourth signals. The presented waveforms can be used to induce and control current signals lasting ~400 as (Figure 4a(II)).

We exploited the dielectric's strong-field interaction to determine the attosecond electronic delay response in the dielectric. Also, we demonstrated an all-optical, direct and simple approach to sample light fields spanning two octaves with attosecond resolution. This field sampling approach can be implemented in different environments and experimental setups to provide a real-time connection between the ultrafast dynamics in matter and its driver field. Consequently, using this realistically sampled field in simulations, calculations, and fitting algorithms related to the measured spectroscopic response of matter provides more accurate interpretation of and insight into the underlying physics of these dynamics. Moreover, we utilized synthesized waveforms to exhibit full control of electron motion in the dielectric. This electron control can be used to develop quantum electronics, paving the way to extend the frontiers of modern electronics and data processing technologies into the petahertz realm.

Method

Field-induced reflectivity modulation measurement of the SiO2 dielectric: in this experiment (setup illustrated in Figure 1c), conducted in an ambient environment, the beam of few-cycle visible (500-700 nm, centered at λ = 600 nm, p-polarized) laser pulses is split into two beams by passing the laser through a two-hole mask. The mask is designed to have two different hole diameters (3 mm and 1 mm); therefore, the two beams that emerge through the mask have different intensities. The first beam has a high intensity (pump beam) to induce the phase transition and alter the reflectivity of the SiO2 substrate. The estimated pump beam field strength is 0.78 V/Å (at lower field strengths ≤ 0.67 V/Å, no significant reflectivity modulation signal was observed), which is lower than the damage threshold [18,22].
Note that the reflectivity signal disappears when the SiO2 substrate is damaged, so all the presented measurements were collected at field intensities lower than the damage threshold. The second beam (probe beam) has a lower intensity (≤0.1 V/Å) than the threshold field strength required to induce any degree of phase transition in SiO2. The two beams emerging from the mask are incident on two D-shaped focusing mirrors (f = 100 mm) and focused onto the 100 µm thick SiO2 substrate (incident angle < 5°). An imaging system was used to ensure spatial overlap between the two beams. One of these mirrors is attached to a piezo-stage to control the relative delay between the beams with attosecond resolution. The reflected probe beam (off the substrate's front surface) is tightly focused into the entrance of an optical spectrometer after propagating a distance sufficient to be spatially isolated from the pump beam; a polarizer and a one-hole mask were introduced to filter out the pump beam. The measured probe beam spectrum (in the presence of the reflected pump beam) is shown in Figure S1.

Figures and Figure legends

Figure 1c: the electronic delay response and all-optical field sampling experiment setup; see text for explanation. A mask splits the main beam into two beams, a strong one (pump) and a weak one (probe). The two beams are focused onto a hundred-micron-thick dielectric (SiO2 substrate). One of the D-shaped mirrors is connected to a piezo-stage to control the relative delay between the pump and probe pulses with attosecond resolution. An optical spectrometer measures the reflectivity modulation of the probe beam spectrum reflected from the substrate. A polarizer and a one-hole mask are introduced in the probe beam path before the spectrometer to enhance the signal-to-noise ratio of the reflectivity modulation measurements.
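As a closing illustration of the waveform-tailoring idea in the quantum-control section above, the sketch below superposes four spectral channels with adjustable phases and locates the strong-field maxima that would trigger the electron motion; all channel parameters are illustrative assumptions, not the LFS specification.

```python
import numpy as np

# Toy light-field synthesizer: four spectral channels spanning 250-1000 nm
c0 = 2.99792458e8
t = np.linspace(-20e-15, 20e-15, 8000)

channels = [  # (center wavelength m, envelope duration s, amplitude, phase rad)
    (900e-9, 6e-15, 1.0, 0.0),
    (650e-9, 5e-15, 0.9, np.pi / 2),
    (450e-9, 4e-15, 0.7, np.pi),
    (300e-9, 3e-15, 0.5, 0.0),
]

# Coherent superposition of the channels gives the synthesized waveform
E = np.zeros_like(t)
for lam, tau, A, phi in channels:
    w = 2 * np.pi * c0 / lam
    E += A * np.exp(-(t / tau) ** 2) * np.cos(w * t + phi)

# Strong-field "triggering events": local maxima of the field above a threshold;
# retuning the channel phases reshapes the waveform and shifts these events
mask = (E[1:-1] > E[:-2]) & (E[1:-1] > E[2:]) & (E[1:-1] > 0.8 * E.max())
print("strong-field event times (fs):", np.round(t[1:-1][mask] * 1e15, 2))
```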
2,622.2
2021-05-13T00:00:00.000
[ "Physics" ]
Multi-existence of multi-solitons for the supercritical nonlinear Schrödinger equation in one dimension

For the L² supercritical generalized Korteweg-de Vries equation, we proved in a previous article the existence and uniqueness of an N-parameter family of N-solitons. Recall that, for any N given solitons, we call N-soliton a solution of the equation which behaves as the sum of these N solitons asymptotically as time goes to infinity. In the present paper, we also construct an N-parameter family of N-solitons for the supercritical nonlinear Schrödinger equation, in dimension 1 for the sake of simplicity. Nevertheless, we do not obtain any classification result; but recall that, even in the subcritical and critical cases, no general uniqueness result has been proved yet.

Recall also that (NLS) admits the following symmetries.

• Galilean invariance: if u(t, x) satisfies (NLS), then for any v_0 ∈ ℝ,

$$w(t,x) = e^{i\frac{v_0}{2}\left(x - \frac{v_0}{2}t\right)}\,u(t, x - v_0 t)$$

also satisfies (NLS).

We now consider solitary waves of (NLS), in other words solutions of the form u(t, x) = e^{ic_0 t} Q_{c_0}(x), where c_0 > 0 and Q_{c_0} is a solution of

$$Q_{c_0}'' - c_0 Q_{c_0} + Q_{c_0}^p = 0, \quad Q_{c_0} \in H^1(\mathbb{R}), \quad Q_{c_0} > 0. \quad (1.1)$$

Recall that such a positive solution of (1.1) exists and is unique up to translations, and is moreover the solution of a variational problem: we call Q_{c_0} the solution of (1.1) which is even, and we denote Q := Q_1. By the symmetries of (NLS), for any γ_0, v_0, x_0 ∈ ℝ,

$$u(t,x) = e^{i\left(\gamma_0 + c_0 t + \frac{v_0}{2}x - \frac{v_0^2}{4}t\right)}\,Q_{c_0}(x - v_0 t - x_0)$$

is also a solitary wave of (NLS), moving on the line x = v_0 t + x_0, which we also call a soliton. Finally, recall that, in the supercritical case p > 5, solitons are unstable (see [8]). A striking illustration of this fact is the following result of Duyckaerts and Roudenko [5] (adapted from a previous work of Duyckaerts and Merle [4]), obtained for the 3d focusing cubic nonlinear Schrödinger equation (NLS-3d), which is also L² supercritical and H¹ subcritical, as in our case.

Proposition 1.1 ([5]). Let A ∈ ℝ. If t_0 = t_0(A) > 0 is large enough, then there exists a radial solution U^A ∈ C^∞([t_0, +∞), H^∞) of (NLS-3d) such that

$$\left\| U^A(t) - e^{it}Q - A e^{-e_0 t}\,Y^+ \right\|_{H^1} \le C e^{-2e_0 t} \quad \text{for all } t \ge t_0,$$

where e_0 > 0 and Y^+ ≠ 0 is in the Schwartz space S. In particular, U^A(t) = e^{it}Q if A = 0, whereas, for A ≠ 0, U^A(t) ≠ e^{it}Q and lim_{t→+∞} ‖U^A(t) − e^{it}Q‖_{H¹} = 0.

Note that, in the subcritical and critical cases p ≤ 5, no such special solutions U^A(t) can exist, due to a variational characterization of Q. Indeed, if lim_{t→+∞} ‖u(t) − e^{it}Q‖_{H¹} = 0, then u(t) = e^{it}Q in this case. The purpose of this paper is to extend Proposition 1.1 to multi-solitons.

• In the L² subcritical and critical cases, i.e. for (NLS) with p ≤ 5, there exists a large literature on the problem of existence of multi-solitons and on their properties. Merle [12] first established an existence result in the critical case, as a consequence of a blow-up result and the conformal invariance. This result was extended by Martel and Merle [10] to the subcritical case, using arguments developed by Martel, Merle and Tsai [11] for the stability in H¹ of solitons. Nevertheless, we recall that no general uniqueness result has been proved, contrary to the generalized Korteweg-de Vries (gKdV) equation (see [9]). For other stability and asymptotic stability results on multi-solitons of some nonlinear Schrödinger equations, see [13,14,15].

• In the L² supercritical case, i.e. in a situation where solitons are known to be unstable, Côte, Martel and Merle [3] have recently proved the existence of at least one multi-soliton solution for (NLS) (Theorem 1.2). Recall that, with respect to [10,11], the proof of Theorem 1.2 relies on an additional topological argument to control the unstable nature of the solitons.
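In dimension 1 the ground state is explicit: Q(x) = ((p+1)/2)^{1/(p−1)} sech^{2/(p−1)}((p−1)x/2) solves Q'' − Q + Q^p = 0, and Q_c(x) = c^{1/(p−1)} Q(√c x). The sketch below evaluates Q_c for a supercritical exponent and checks the residual of (1.1) by finite differences; the grid parameters are illustrative choices.

```python
import numpy as np

def Q(x, p):
    """Explicit 1D ground state of Q'' - Q + Q^p = 0 (c = 1)."""
    return ((p + 1) / 2) ** (1 / (p - 1)) * np.cosh((p - 1) * x / 2) ** (-2 / (p - 1))

def Q_c(x, p, c):
    """Rescaled soliton profile Q_c(x) = c^{1/(p-1)} Q(sqrt(c) x)."""
    return c ** (1 / (p - 1)) * Q(np.sqrt(c) * x, p)

p, c = 7.0, 2.0                  # supercritical exponent p > 5
x = np.linspace(-15, 15, 4001)
h = x[1] - x[0]
q = Q_c(x, p, c)

# Second derivative by central differences; check Q_c'' - c Q_c + Q_c^p = 0
qxx = (np.roll(q, -1) - 2 * q + np.roll(q, 1)) / h**2
residual = qxx - c * q + q ** p
print(f"max interior residual of (1.1): {np.max(np.abs(residual[1:-1])):.2e}")
```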
Finally, recall that Theorem 1.2 was also obtained for the L² supercritical gKdV equation, and has been a crucial starting point in [2] to obtain the multi-existence and the classification of multi-solitons. It is a similar multi-existence result that we propose to prove in this paper.

Main result and outline of the paper

The whole paper is devoted to proving the following theorem of existence of a family of multi-solitons for the supercritical (NLS) equation. There exist γ > 0 and an N-parameter family (ϕ_{A₁,...,A_N})_{(A₁,...,A_N)∈R^N} of solutions of (NLS) such that, for all (A₁, ..., A_N) ∈ R^N, there exist C > 0 and t₀ > 0 such that the solution stays exponentially close to the sum of the N solitons for all t ≥ t₀. Finally, to prove Proposition 3.1, we follow the strategy of the proof of the similar proposition in [2], except for the monotonicity property of the energy, which does not hold for the (NLS) equation. While this monotonicity property was necessary to obtain the classification, we prove that a slightly different functional, estimated regardless of its sign, is sufficient for our purpose. We also rely on refinements of arguments developed in [3], in particular the topological argument to control the unstable directions.

Preliminary results

Notation 2.1. The following notation is used throughout the paper.

Linearized operator around a stationary soliton

The linearized equation appears if one considers a solution of (NLS) close to the soliton e^{it}Q, and the self-adjoint operators L₊ and L₋ are defined accordingly (a standard form is recalled in the display after this subsection). The spectral properties of L are well known (see [7,16] for instance), and are summed up in the following proposition.

Proposition 2.2 ([7,16]). Let σ(L) be the spectrum of the operator L defined on L²(R) × L²(R) and let σ_ess(L) be its essential spectrum. Then the essential spectrum lies on the imaginary axis. Furthermore, e₀ and −e₀ are simple eigenvalues of L with eigenfunctions Y₊ and Y₋ = \bar{Y}₊, which have an exponential decay at infinity. Finally, the null space of L is spanned by ∂ₓQ and iQ, and as a consequence, the null space of L₊ is spanned by ∂ₓQ and the null space of L₋ is spanned by Q.

Remark 2.3. By standard ODE techniques, we can quantify the exponential decay of Y± and ∂ₓY± at infinity. In fact, there exist η₀ > 0 and C > 0 such that, for all x ∈ R,

|Y±(x)| + |∂ₓY±(x)| ≤ C e^{−η₀|x|}.

Moreover, L, L₊ and L₋ satisfy some properties of positivity or coercivity. The following proposition sums up the two properties useful for our purpose. Note that the first one is proved in [16], while the second one is proved in [4,5]. (ii) There exists κ₀ > 0 such that, for all v = v₁ + iv₂ ∈ H¹, a coercivity estimate of the quadratic form associated with L holds, up to a finite number of directions.

Finally, we extend Proposition 2.2 to the operator L_c linearized around a soliton e^{ict}Q_c(x), by a simple scaling argument. In fact, we recall that if u is a solution of (NLS), then w(t, x) = λ^{2/(p−1)} u(λ²t, λx) is also a solution, and moreover we have Q_c(x) = c^{1/(p−1)} Q(√c x). Finally, e_c and −e_c are simple eigenvalues of L_c, and the null space of L_c is spanned by ∂ₓQ_c and iQ_c.

Now, suppose that there exists λ ∈ R such that Y₂ = λQ. Then we would have L₋Y₂ = −e₀Y₁ = λL₋Q = 0, and so Y₁ = 0. But it would imply L₊Y₁ = 0 = e₀Y₂, and so Y₂ = 0, which would be a contradiction. Therefore, by (i) of Proposition 2.4, we have (L₋Y₂, Y₂) > 0.

Multi-solitons results

A set of parameters (1.2) being given, we adopt the following notation. (iv) e_j = e_{c_j}, where e_c = c^{3/2} e₀. Now, to estimate interactions between solitons, we denote c_min = min{c_k ; k ∈ [[1, N]]}, and the small parameters γ and σ₀ built from it. From [10], it appears that γ is a suitable parameter to quantify interactions between solitons in large time.
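To fix notation for the computations that follow, the linearized system around e^{it}Q can be written out explicitly. The form below again assumes the normalization i∂ₜu + ∂ₓ²u + |u|^{p−1}u = 0 and the usual convention for L₊ and L₋; it is a standard reconstruction, not the paper's own display.

```latex
% Linearization around e^{it}Q: write u = e^{it}(Q + v) with v = v_1 + i v_2.
% Using Q'' - Q + Q^p = 0, the linearized flow \partial_t v = Lv reads
\begin{align*}
  L_+ v_1 &= -\partial_x^2 v_1 + v_1 - p\,Q^{p-1} v_1,\\
  L_- v_2 &= -\partial_x^2 v_2 + v_2 - Q^{p-1} v_2,\\
  \partial_t v_1 &= L_- v_2, \qquad \partial_t v_2 = -L_+ v_1.
\end{align*}
% In particular, the decaying solution e^{-e_0 t}(Y_1 + i Y_2) satisfies
% L_- Y_2 = -e_0 Y_1 and L_+ Y_1 = e_0 Y_2, the coupled system used in the text.
```

These two coupled relations are exactly the ones exploited in the contradiction argument above to show that Y₂ is not proportional to Q.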
For instance, we have, for j ≠ k and all t ≥ 0, an exponential smallness estimate, in terms of γ, of the interaction terms between the solitons. From the definition of σ₀ and Remark 2.3, such an inequality is also true for Y±_j. Moreover, since σ₀ has the same definition as in [3], Theorem 1.2 can be rewritten as follows: there exist T₀ ∈ R, C > 0 and ϕ ∈ C([T₀, +∞), H¹) such that, for all t ≥ T₀, the estimate (2.5) holds.

Construction of a family of multi-solitons

In this section, we prove Theorem 1.3 as a consequence of the following crucial Proposition 3.1. Let us denote ϕ_{A₁,...,A_N} the corresponding solution, and let us show that the hypothesis implies A_{σ(j)} = A'_{σ(j)} for all j. For j = 1, first note that, from the construction of ϕ_{A₁,...,A_N}, the hypothesis gives the first-order asymptotic expansions of both solutions, and so, by difference, we obtain an identity relating them. Now, if we multiply this equality by Y⁺_{σ(1)}(t), integrate, and take the imaginary part of it, we obtain A_{σ(1)} = A'_{σ(1)}, by Claim 2.6 and (2.4). For the inductive step from j − 1 to j, we write similar expansions, and we finally obtain A_{σ(j)} = A'_{σ(j)} as expected, by taking the difference of these two expressions, multiplying by Y⁺_{σ(j)}(t), integrating and taking the imaginary part of it. Now, the only purpose of the rest of the paper is to prove Proposition 3.1. We want to construct a solution u of (NLS) close to ϕ in the sense of (3.1), and we set z = u − ϕ.

Equation of z

Since u is a solution of (NLS) and so is ϕ (and this fact is crucial for the whole proof), we get an equation for the difference z. But from Corollary 2.5, we have the corresponding expansion, where Y⁺_{c_j,1} = Re Y⁺_{c_j} and Y⁺_{c_j,2} = Im Y⁺_{c_j}, and so we get the following equation for z: (3.4). By developing the nonlinearity, we find that ω(z) satisfies |ω(z)| ≤ C|z|² for |z| ≤ 1. Hence, we can rewrite (3.4) in the shorter form (3.6), where ω₁ satisfies ‖ω₁(t)‖_{L²} ≤ C e^{−e_j t} for all t ≥ T₀. We finally estimate the source term Ω in the following lemma, which we prove in Appendix A.

Compactness argument assuming uniform estimates

To prove Proposition 3.1, we follow the strategy of [10,3]. We first need some notation for our purpose. (iii) S_{R^{k₀}}(r) denotes the sphere of radius r in R^{k₀}. (iv) B_B(r) is the closed ball of the Banach space B, centered at the origin and of radius r ≥ 0. Let S_n → +∞ be an increasing sequence of times, let b_n = (b_{n,k})_{k∈K} ∈ R^{k₀} be a sequence of parameters to be determined, and let u_n be the solution of (3.7).

Proposition 3.4. There exist n₀ ≥ 0 and t₀ > 0 (independent of n) such that the following holds. For each n ≥ n₀, there exists b_n ∈ R^{k₀} with ‖b_n‖ ≤ 2e^{−(e_j+2γ)S_n}, and such that the solution u_n of (3.7) is defined on the interval [t₀, S_n] and satisfies the uniform estimates below.

Assuming this key proposition of uniform estimates, we can sketch the proof of Proposition 3.1, relying on compactness arguments developed in [10,3]. The proof of Proposition 3.4 is postponed to the next section.

Sketch of the proof of Proposition 3.1 assuming Proposition 3.4. From Proposition 3.4, there exists a sequence u_n(t) of solutions to (NLS), defined on [t₀, S_n], such that the uniform estimates above hold. In particular, there exists C₀ > 0 such that ‖u_n(t₀)‖_{H¹} ≤ C₀ for all n ≥ n₀. Thus, there exists u₀ ∈ H¹(R) such that u_n(t₀) ⇀ u₀ in H¹ weak (after passing to a subsequence). Moreover, using the compactness result [10, Lemma 2], we can suppose that u_n(t₀) → u₀ in L² strong, and so in H^{s_p} strong by interpolation, where 0 ≤ s_p < 1 is an exponent for which local well-posedness and continuous dependence hold, according to a result of Cazenave and Weissler [1]. Now, consider the solution u of (NLS) with u(t₀) = u₀, and fix t ≥ t₀.
For n large enough, we have S_n > t, so u_n(t) is defined, and by continuous dependence of the solution of (NLS) upon the initial data, we have u_n(t) → u(t) in H^{s_p} strong. By the uniform estimates, we finally obtain, by weak convergence, the same bounds for u. Thus, u is a solution of (NLS) which satisfies (3.1).

Proof of Proposition 3.4

The proof proceeds in several steps. For the sake of simplicity, we will drop the index n for the rest of this section (except for S_n). As Proposition 3.4 is proved for given n, this should not be a source of confusion. Hence, we will write u for u_n, z for z_n, b for b_n, etc. We possibly drop the first terms of the sequence S_n, so that, for all n, S_n is large enough for our purposes. From (3.6), the equation satisfied by z is (3.8); in particular, we have the estimate (3.9).

Modulated final data

Lemma 3.5. For n ≥ n₀ large enough, the following holds. For all a⁻ ∈ R^{k₀}, there exists a corresponding b ∈ R^{k₀}.

Proof. Consider the linear application Φ associated with the projections onto the unstable directions. Moreover, from (2.4), there exists C₀ > 0 independent of n such that, for l ≠ k, the off-diagonal terms of Φ are exponentially small. Thus, by taking n₀ large enough, Φ is invertible, which allows us to conclude the proof of Lemma 3.5.

Claim 3.6. The following estimates hold at S_n.

Equations on α±_k

Let t₀ > 0, independent of n, be a constant to be determined later in the proof, let a⁻ ∈ B_{R^{k₀}}(e^{−(e_j+2γ)S_n}) to be chosen, let b be given by Lemma 3.5 and let u be the corresponding solution of (3.7). We now define the maximal time interval [T(a⁻), S_n] on which suitable exponential estimates hold.

Proof. Following Notation 2.7, we compute the time derivatives of α±_k. Moreover, using the equation of z (3.8) and an integration by parts, we bound the second term. Using the estimate ‖ω₁(t)‖_{L²} ≤ Ce^{−e_j t} and Lemma 3.2, we bound the last term. From the definition of γ (2.3), we deduce the required smallness and, as in the proof of Lemma 3.2, we also obtain the analogous bound. Hence, we have (3.10). Finally, if we denote z₁ = Re(ze^{−iθ_k}) and z₂ = Im(ze^{−iθ_k}), we find the componentwise form of these equations.

Control of the stable directions

We estimate here α⁺_k(t) for all k ∈ [[1, N]] and t ∈ [T(a⁻), S_n]. From (3.10) and (3.9), we have |(e^{−e_k s} α⁺_k(s))′| ≤ K₂ e^{−(e_j+e_k+4γ)s}, and so, by integration on [t, S_n], we get |e^{−e_k S_n} α⁺_k(S_n) − e^{−e_k t} α⁺_k(t)| ≤ K₂ e^{−(e_j+e_k+4γ)t}, which gives an estimate of α⁺_k(t). But from Claim 3.6 and Lemma 3.5, we control α⁺_k(S_n), and so finally α⁺_k(t) is exponentially small.

Localized Weinstein's functional

We follow here the same strategy as in [11,10,3] to estimate the energy backwards. For this, we define the function ψ. Moreover, we set the localization functions h₁ and h₂. Observe that the functions h₁ and h₂ take values close to c_k. Hence, we have φ_k ≥ 0 and Σ_{k=1}^N φ_k ≡ 1, and by an Abel transform, we also have a summation-by-parts identity.

Proof. See Appendix A.

Now, we define a quantity H related to the energy for z by (3.14). The following estimate of the variation of H is the main new point of this paper, and as its proof is long and technical, it is postponed to Appendix B. Thus, by integration on [t, S_n], we obtain |H(t) − H(S_n)| ≤ (K₁/√t) e^{−2(e_j+γ)t}, and so H(t) is controlled. But from Claim 3.6 and Lemma 3.5, we also control H(S_n), and so H(t) is exponentially small. Finally, expanding |ϕ + r_j + z|^{p+1} and using the definition of H (3.14) together with (2.5), we easily obtain (3.15) by techniques similar to those used in the proof of Lemma 3.2 in Appendix A, replacing (ϕ + r_j) by R plus an exponentially small error term.

Control of the directions of null energy

Define the parameters β_k and γ_k as the components of z along the null directions. First, note that there exist C₁, C₂ > 0 such that the equivalence (3.16) holds. Indeed, by (2.4), we have the corresponding bounds, and similarly for the reverse inequality. Now, we compare the functionals H[z̃] and H[z] in the following lemma, which we prove in Appendix A. Completely similarly, we find, for all k ∈ [[1, N]], the analogous estimates, using (3.13) for k ∈ J and (3.9) for k ∈ K.
Finally, gathering all estimates from (3.18), we have proved that there exists K₀ > 0 such that, for all t ∈ [T(a⁻), S_n], the corresponding bound on z̃ holds. We now want to prove the same estimate for z, and so we have to control the parameters β_k(t) and γ_k(t) introduced above.

Improvement of the decay of z

Proof. By (3.16), it is enough to prove this estimate for |β_k(t)| + |γ_k(t)| with k ∈ [[1, N]] fixed. To do this, first write the equation of z̃ from the equation of z (3.6). Then, multiply this equation by R̄_k, integrate, and take the real part of it, so that we obtain, by (2.4), (2.5) and Lemma 3.2, an estimate of β_k′(t); in other words, we have, by (3.16) and (3.9), a bound on |β_k′(t)|. Moreover, γ_k′ = Im ∫ ∂_t z R̄_k + Im ∫ z ∂_t R̄_k, and so, gathering the previous estimates, we find a similar bound on |γ_k′(t)|. Completely similarly, if we multiply the equation on z by ∂ₓQ_{c_k}(λ_k)e^{−iθ_k}, integrate and take the imaginary part of it, we obtain the last required bound. Hence, we have proved that there exist C₃, C₄ > 0 such that, for all t ∈ [T(a⁻), S_n], |β_k′(t)| + |γ_k′(t)| is controlled. Finally, choosing t₀ large enough and integrating on [t, S_n], we get |β_k(t)| + |γ_k(t)| ≤ |β_k(S_n)| + |γ_k(S_n)| + (C/t^{1/4}) e^{−(e_j+γ)t}. But from Claim 3.6, Lemma 3.5 and (3.16), we control the terms at S_n, and so finally, for all t ∈ [T(a⁻), S_n], |β_k(t)| + |γ_k(t)| ≤ (C/t^{1/4}) e^{−(e_j+γ)t}.

Control of the unstable directions for k ∈ K by a topological argument

Lemma 3.12 being proved, we choose t₀ large enough so that K₀ t₀^{−1/4} ≤ 1/2. Therefore, we have the improved estimate. We can now prove the following final lemma, which concludes the proof of Proposition 3.4. Note that its proof is very similar to the one in [2], by the common choice of notation, but it is reproduced here for the reader's convenience. We can now consider, for t ∈ [T, S_n], the quantity N(t). To calculate N′, we start from estimate (3.12). Multiplying by |α⁻_k(t)|, we obtain a differential inequality, where e_min = min{e_k ; k ∈ K}. Summing over k ∈ K, we can estimate N′. Hence, we have, for all t ∈ [T, S_n], a bound on N′ where θ = 2(e_min − e_j − 2γ) > 0 by the definitions of γ (2.3) and of the set K. In particular, for all τ ∈ [T, S_n] satisfying N(τ) = 1, we have a transversality estimate. Now, we definitively fix t₀ large enough so that K₃ e^{−2γt₀} is small enough. We finally deduce that T(a⁻) − ε ≤ T(ã⁻) ≤ T(a⁻) + ε, as expected.

Second consequence: we can define the map M. Note that M is continuous by the previous point. Moreover, let a⁻ ∈ S_{R^{k₀}}(e^{−(e_j+2γ)S_n}). As N′(S_n) ≤ −θ/2 by (3.21), we deduce by definition of T(a⁻) that T(a⁻) = S_n, and so M(a⁻) = a⁻. In other words, M restricted to S_{R^{k₀}}(e^{−(e_j+2γ)S_n}) is the identity. But the existence of such a map M contradicts Brouwer's fixed point theorem.

A Appendix

Proof of Lemma 3.2. First, we calculate the time-derivative terms; hence, from the expression of Ω (3.5), Ω can be written as a sum of interaction terms. We can now estimate ‖Ω‖_{H¹}; we estimate ‖∂ₓΩ‖_{L²} for example, the term ‖Ω‖_{L²} being similar and easier. To do this, we decompose ∂ₓΩ into several terms. To estimate all these terms in L² norm, we use the facts that ϕ is equal to R plus a small error term according to (2.5), that R multiplied by a term moving on the line x = v_j t + x_j (like r_j) is equal to R_j plus a small error term according to (2.4), and finally that r_j is of order e^{−e_j t}. To illustrate this, we estimate the first two terms I and II, for example, as all other terms can be treated similarly. For I, we simply remark that it is bounded by Ce^{−(e_j+4γ)t}, by the definition of γ (2.3). For II, we decompose it further. Since ‖ϕ − R‖_{H¹} ≤ Ce^{−4γt} by (2.5), the first three terms are bounded in L² norm by Ce^{−(e_j+4γ)t}. Moreover, by (2.4), the next three terms are also bounded in L² norm by Ce^{−(e_j+4γ)t}.
Finally, for the last term, we write it out explicitly, so that, since p > 5, we can conclude similarly that ‖II‖_{L²} ≤ Ce^{−(e_j+4γ)t}.

Proof of Lemma 3.9. If k > l, then the corresponding estimate holds, and similarly if k < l, and the conclusion follows again from the definition of γ. For the last one, we write out the time derivative, so that finally ‖∂_t ψ_k‖_{L^∞} is controlled. Since h₁ and h₂ have a similar form, it is clear that it suffices to prove the inequalities for h₂, for example. Moreover, the first inequalities are obvious by (iii). Finally, for the last inequality, we write out the derivative, dropping the argument λ_k for this proof, which will not be a source of confusion since there is no time derivative. Hence, we compute the expansion and, developing in terms of z, we find the claimed identity.

B Appendix

We prove here Proposition 3.10. To do this, we first need a lemma quantifying the fact that ϕ almost satisfies a transport equation similar to those satisfied by the solitons. Note finally that, since ϕ_t takes values in H^{−1}, all integrals in this appendix may be seen as the dual bracket ⟨·,·⟩_{H¹,H^{−1}}.

Remark B.2. To find the transport equation almost satisfied by ϕ, it suffices to compute an exact relation for R_k with k ∈ [[1, N]].

Proof of Lemma B.1. Let f ∈ H¹ and compute the pairing. First note that, by (2.5), |I| ≤ C‖ϕ − R‖_{H¹}‖f‖_{H¹} ≤ Ce^{−4γt}‖f‖_{H¹}. Moreover, by (2.4), we also have |III| ≤ Ce^{−4γt}‖f‖_{L²}. For the last term, we first compute the exact relation satisfied by R_k, and so, using the expression of ∂ₓ²R_k and (iv) of Lemma 3.9, we also have |II| ≤ Ce^{−4γt}‖f‖_{L²}, which concludes the proof of Lemma B.1. From the definition of H (3.14), we now compute its variation, using integrations by parts. For I, we have to compute
5,605.6
2010-08-26T00:00:00.000
[ "Mathematics" ]
Quantum efficiency, purity and stability of a tunable, narrowband microwave single-photon source

We demonstrate an on-demand source of microwave single photons with 71–99% intrinsic quantum efficiency. The source is narrowband (300 kHz) and tunable over a 600 MHz range around 5.2 GHz. Such a device is an important element in numerous quantum technologies and applications. The device consists of a superconducting transmon qubit coupled to the open end of a transmission line. A π-pulse excites the qubit, which subsequently rapidly emits a single photon into the transmission line. A cancellation pulse then suppresses the reflected π-pulse by 33.5 dB, resulting in 0.005 photons leaking into the photon emission channel. We verify strong antibunching of the emitted photon field and determine its Wigner function. Non-radiative decay and 1/f flux noise both affect the quantum efficiency. We also study the device stability over time and identify uncorrelated discrete jumps of the pure dephasing rate at different qubit frequencies on a time scale of hours, which we attribute to independent two-level-system defects in the device dielectrics, dispersively coupled to the qubit. Our single-photon source, with only one input port, is more compact and scalable compared to standard implementations.

INTRODUCTION

The single photon, the fundamental excitation of the electromagnetic field, plays a key role in quantum physics and can find practical application in quantum sensing 1, communication 2, and computing 3-5. Recently, considerable progress has been made in the generation of optical photons, e.g. by using quantum dots 6-8. However, in the microwave domain, the much smaller photon energy introduces many constraints for the realization of single-photon sources; for instance, operation at millikelvin temperatures is necessary to avoid thermal generation of photons. Narrowband microwave single photons are essential for precise interactions with circuits exhibiting a shaped energy structure, such as coplanar resonators 9, three-dimensional cavities 10, and acoustic-wave resonators 11,12, which can be used as quantum memories.

Superconducting quantum circuits are suitable for the implementation of on-demand microwave photon sources. So far, several different methods have been used. The first method is based on a qubit coupled to a resonator 13-15, where the source bandwidth is limited by the linewidth of the resonator. Secondly, in refs. 16-18, single photons are generated due to inelastic Cooper-pair tunneling. This type of source has a high emission rate, but it cannot generate a superposition of vacuum and a single-photon Fock state. Thirdly, a single-photon generator based on emission from a qubit into a waveguide requires proper engineering of the asymmetric couplings to the control and emission channels 19-22. Finally, shaped single photons emitted from a qubit located near the end of a transmission line with a tunable-impedance termination 23 were demonstrated in experiment 24. None of these experiments included a thorough study of the photon leakage of the excitation pulse from the control to the emission channel, which affects the purity of the single photon.

In this work, we implement a theoretical proposal from ref. 23: a frequency-tunable qubit is capacitively coupled to the end of an open transmission line 25,26. Only a single channel exists in our system, so that the qubit, excited by a π-pulse, can only release a single photon back to the input.
We cancel the π-pulse, after its interaction with the qubit, by interfering it with another, phase-shifted pulse, and show a photon leakage of only 0.5% of a photon from the excitation pulse. The intrinsic quantum efficiency of our single-photon source is 71–99% over a tunable frequency range of 600 MHz around 5.2 GHz, which is about 1600 times larger than the single-photon linewidth (300 kHz). This bandwidth is more than 20 times narrower than that of the tunable microwave single-photon sources reported in refs. 16-18, 20-22, 24. Different from refs. 16-18, our single-photon source allows us to generate a superposition of vacuum and a single-photon Fock state. Moreover, compared to other results with more than one input port 20-22,24, our single-port single-photon source does not require engineering of the asymmetric couplings on chip, and is more compact and scalable as the number of sources increases.

Importantly, the intrinsic quantum efficiency, i.e. the fidelity due only to the emitter coherence, can be limited by both the pure dephasing rate and the non-radiative decay rate of the emitter. It is important to understand the noise mechanisms determining these rates in order to make further improvements. We systematically study the limitation of the intrinsic quantum efficiency and the temporal fluctuations of the single-photon source over 136 h. The results show that both non-radiative decay and 1/f flux noise can affect the quantum efficiency, originating from different types of two-level systems (TLSs). In addition, we also characterize the fluctuations of the pure dephasing rate due to dispersively coupled TLS defects with a narrow linewidth, which can lead to a decrease of the quantum efficiency by up to 60%.

RESULTS

Experimental setup and procedure for single-photon emission

Our device consists of a magnetic-flux-tunable Xmon-type transmon qubit, capacitively coupled to the open end of a one-dimensional coplanar-waveguide transmission line. This zero-current boundary condition behaves as a mirror for the incoming microwave radiation. The corresponding simplified circuit diagram is shown in Fig. 1a. An asymmetric beam splitter, implemented by a 20 dB directional coupler, is connected to the sample to provide channels for qubit excitation and pulse cancellation. The circuit is made of aluminum on a silicon substrate, and is fabricated with a standard lithography process 27. The sample is characterized at T = 10 mK, with its parameters shown in Table 1.

As shown in Fig. 1a, we send a pulse to the input port of the directional coupler with the amplitude a_in(t)/A, where A = 0.1 is the attenuation from the −20 dB directional coupler. Then, a_in(t) is the corresponding amplitude of the pulse at the qubit. The output field at the qubit, using the standard input-output relation, is a_out^q(t) = a_in(t) − i√(Γ_r) σ⁻(t) 23,28, where σ⁻(t) is the emission operator of the qubit. By adding another pulse β(t)/A to the cancellation port of the directional coupler, we have a_out(t) = a_out^q(t) + β(t) at the output of the directional coupler. When β(t) = −a_in(t), we have a_out(t) = −i√(Γ_r) σ⁻(t) (the small red pulse). This means we obtain a single photon if a_in(t) is a π-pulse, and a superposition of vacuum and a single-photon Fock state if a_in(t) is a π/2-pulse. We adjust the external flux to zero (Φ = 0) so that the qubit reaches its highest frequency ω_01.
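As an illustration of this cancellation scheme (a semiclassical sketch, not the authors' code: the rates are taken from this paper, while the drive normalization a_in = Ω/(2√Γ_r) and the Euler integration are assumptions of the sketch), the resonant Bloch equations can be integrated numerically to see how β(t) = −a_in(t) removes the reflected drive and leaves only the emission term:

```python
# Semiclassical sketch of the single-port source: a resonant Gaussian pulse
# drives the qubit; the reflected field is a_out(t) = a_in(t) - 1j*sqrt(Gr)*sm(t),
# and a cancellation pulse beta(t) = -a_in(t) leaves only the emission term.
import numpy as np

G1 = 2 * np.pi * 376e3   # relaxation rate Gamma_1 (rad/s), from Table 1
G2 = 2 * np.pi * 193e3   # decoherence rate Gamma_2 (rad/s), from the Fig. 1c fit
Gr = 2 * np.pi * 270e3   # radiative decay rate Gamma_r (rad/s)
xi = 20e-9               # Gaussian pulse width (s)

t = np.linspace(-150e-9, 3e-6, 60000)
dt = t[1] - t[0]
env = np.exp(-t ** 2 / (2 * xi ** 2))
omega = (np.pi / 2) * env / np.trapz(env, t)   # Rabi rate, pi/2 pulse area

sm, sz = 0.0 + 0.0j, -1.0                      # <sigma->, <sigma_z>, ground state
sm_t = np.zeros_like(t, dtype=complex)
for i in range(len(t)):                        # Euler steps of the Bloch equations
    sm_t[i] = sm
    dsm = -G2 * sm + 0.5j * omega[i] * sz
    dsz = -G1 * (sz + 1) - 2 * omega[i] * np.imag(sm)
    sm, sz = sm + dsm * dt, sz + dsz * dt

a_in = omega / (2 * np.sqrt(Gr))               # assumed drive normalization
a_out = a_in - 1j * np.sqrt(Gr) * sm_t         # without cancellation
a_out_cancelled = -1j * np.sqrt(Gr) * sm_t     # with beta(t) = -a_in(t)

n_coh = np.trapz(np.abs(a_out_cancelled) ** 2, t)
print(f"coherent photon number ~ {n_coh:.3f}  (compare Gr/(8*G2) = {Gr/(8*G2):.3f})")
```

With a π/2 pulse area, the integrated coherent emission reproduces the Γ_r/(8Γ_2) ≈ 0.17 value used below to calibrate the leakage.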
We then send a calibrated Gaussian pulse ∝ exp(−t²/2ξ²) with ξ = 20 ns, which is on resonance with the qubit, so that it acts as a π-pulse. We measure the output field using a traveling-wave parametric amplifier (TWPA) 29 followed by a high-electron-mobility transistor amplifier (HEMT) (Fig. 7). Both quadratures of the signal output from the directional coupler, with and without the cancellation, are amplified and recorded by a digitizer (not shown) as a voltage V(t) = I(t) + iQ(t). The voltage is then normalized by the system gain obtained from the on-resonance Mollow triplet 30-32. After averaging, the corresponding photon number at the qubit is defined as

n = (1/(2ℏω_01 Z_0)) ∫_{t_0}^{t_1} (⟨|V(t)|²⟩ − ⟨|V_N|²⟩) dt, (1)

where t_0 and t_1 denote when the signal starts and ends, respectively. Note that for the qubit emission, t_0 is the time corresponding to the maximal amplitude of the emission. ⟨V_N⟩ is the averaged system voltage noise, and Z_0 ≈ 50 Ω is the waveguide impedance.

Figure 1b shows the power of the input π-pulse as a function of time. The black line indicates the power of the input pulse at the sample after the gain calibration when the qubit is tuned away, while the blue one corresponds to the residual pulse after cancellation. The result shows a 33.5 dB suppression of the π-pulse in power due to the cancellation, resulting in a photon leakage of n_leak^meas = 0.0049, according to Eq. (1). In Fig. 1c, we also measure the coherent emission (red line) from the qubit decay after a π/2-pulse and fit the data to an exponential curve (black) with a decay rate Γ_2/(2π) = 193 ± 4 kHz. By taking the integral over time with Eq. (1), starting from t_0 = 252 ns, we obtain the photon number n_q^meas ≈ 0.173 for the qubit emission. This agrees well with the formula Γ_r/(8Γ_2) = 0.1795 derived below. We notice that n_q^meas is less than 0.5 since we measure only the coherent part of the qubit emission.

The leakage from the excitation pulse can also be estimated without calibrating the system gain, as follows. The driven qubit generates a voltage amplitude of V_q(t) = i2ω_01 Z_0 C_c d σ⁻(t) 20, where d is the qubit dipole moment and C_c represents the coupling capacitance between the qubit and the transmission line. The radiative decay rate is given by Γ_r = S_v(ω)(C_c d)²/ℏ², with S_v(ω) = 2ℏω_01 Z_0 being the spectral density of the voltage quantum noise in the transmission line, where we ignore the effect of thermal noise inside the waveguide since ℏω_01 ≫ k_B T. Therefore, the corresponding emission power from the qubit is |V_q(t)|²/(2Z_0) = ℏω_01 Γ_r |σ⁻(0)|² e^{−2Γ_2 t}, where σ⁻(t) = σ⁻(0) e^{−Γ_2 t}. By taking the integral over time, the photon number is n_q = Γ_r/(2Γ_2)·|σ⁻(0)|² = Γ_r/(8Γ_2). Combining the values of Γ_r and Γ_2 in Table 1, the leakage from the π-pulse is n_leak = (n_leak^meas/n_q^meas)·n_q ≈ 0.005. In reality, |σ⁻(0)| < 0.5 due to the small emission during a π/2-pulse. Here, we ignore this since our pulse length is much shorter than the qubit lifetime.

Fig. 1 (b) Comparison of a π-pulse with and without the cancellation when the qubit is tuned away. The input pulse is suppressed by −33.5 dB with a cancellation pulse, with 5.12 × 10^5 averages. (c) Comparison between the canceled π-pulse from (b) and the photon emission by the qubit (red line) after a π/2-pulse with the pulse cancellation on. The red line is a fit to an exponential decay to extract Γ_2/(2π) = 193 ± 4 kHz.
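The gain-free leakage estimate can be checked with a few lines of arithmetic (a plain numerical restatement of the expressions above, using the rates from Table 1):

```python
# Numerical check of n_leak = (n_leak_meas / n_q_meas) * n_q,
# with n_q = Gr / (8 * G2) the coherent photon number after a pi/2-pulse.
Gr = 270e3            # radiative decay rate / 2pi (Hz), from Table 1
G2 = 188e3            # decoherence rate / 2pi (Hz), from Table 1
n_leak_meas = 0.0049  # measured residual pulse, Eq. (1)
n_q_meas = 0.173      # measured coherent emission after a pi/2-pulse

n_q = Gr / (8 * G2)
print(f"n_q    = {n_q:.4f}")                           # ~0.1795, as quoted
print(f"n_leak = {n_leak_meas / n_q_meas * n_q:.4f}")  # ~0.005
```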
Table 1 The qubit parameters, obtained by single- and two-tone spectroscopy from the reflection-coefficient measurements (see more details in the "Methods" section). The qubit frequency ω_01(Φ) depends on the external flux Φ, and we define ω_01,1 = ω_01(Φ = 0). α is the qubit anharmonicity; Γ_r and Γ_2 are the radiative decay rate and the decoherence rate of the qubit (Γ_2/(2π) = 188(1) kHz). The error bars within parentheses are two standard deviations.

We emphasize that the amplitude of the canceled pulse in Fig. 1b, c was minimized by adjusting the amplitude of the cancellation pulse and the phase difference between the input and the cancellation pulse, and by compensating the time delay between these two pulses. Compared to directly measuring the qubit emission power after a π-pulse, we take advantage of the coherent emission after a π/2-pulse, so that the system noise can be averaged out with far fewer averages.

Qubit operation

Next we vary the pulse length τ, and measure the integral of free-decay traces such as the one in Fig. 1c, normalized to the number of points in the trace. In order to maximize the signal, we digitally rotate the integrated value into the I quadrature. Meanwhile, we also record the second moment of the emitted field, which corresponds to the emitted power ⟨P⟩ = ⟨I² + Q²⟩. Figure 2 shows the Rabi oscillations of ⟨I⟩ and ⟨P⟩ with pulse lengths up to 1.4 μs. The signal is averaged over 1.28 × 10^4 repetitions. The background offset from the system noise is removed from each data point of the power oscillation. The clear oscillatory pattern in the figure is a manifestation of the coherence of the photons emitted by the qubit. By solving the Bloch equations we obtain Eq. (2), where Γ_1 is the relaxation rate of the qubit and Ω is the Rabi frequency. Since ⟨I⟩ ∝ ⟨σ_y⟩ and ⟨P⟩ ∝ 1 + ⟨σ_z⟩, we use Eq. (2) to fit the data and obtain Γ_s/2π = 316 ± 6 kHz and θ_2 + θ_1 = (0.498 ± 0.004)π. The phase difference indicates that the measured radiation is not from a coherent state, for which the power and amplitude would oscillate in phase.

To demonstrate that our device is indeed a single-photon source, we extract the second-order correlation function g⁽²⁾(0) and reconstruct the Wigner function W(α) 33. We send either a π-pulse or a π/2-pulse to excite the qubit. With an appropriate mode-matching filter with an exponential decay, we obtain the quadrature histograms of the measured single-shot voltages, normalized by the gain value. The single-shot measurement is repeated up to 2.56 × 10^7 times. By then subtracting the reference values measured in the absence of the pulse, as outlined for example in ref. 34, we extract the moments of the photon mode a. Figure 3a shows the moments |⟨a⟩|, ⟨a†a⟩ and ⟨(a†)²a²⟩ of the qubit emission after a π-pulse and a π/2-pulse, respectively. The first- and second-order moments are 0.036 ± 0.001 and 0.618 ± 0.003 for a π-pulse, and 0.399 ± 0.035 and 0.337 ± 0.002 for a π/2-pulse. The second-order moments show that the overall quantum efficiencies at the maximum qubit frequency are 61.8% for a single-photon Fock state |1⟩ after a π-pulse, and 67.4% for a superposition state (|0⟩ + |1⟩)/√2 after a π/2-pulse. In our case, the maximum photon number is just one, so we only need to consider moments up to the fourth order, corresponding to two photons. The moments we extract differ from the theoretically expected ⟨a†a⟩ = 1 for the Fock state and |⟨a⟩| = 0.5 for the superposition state.
The numerical result from simulating the dynamics of the qubit using QuTiP 35 shows that the population of the first excited level of our qubit is given by the density-matrix element ρ_11 = 0.93 after a π-pulse with ξ = 20 ns, and |σ⁻| = 0.44 after a π/2-pulse with ξ = 20 ns. The normalized filter for the mode matching is f(t) = √(Γ_1) e^{−Γ_1 t/2}, leading to ⟨a†a⟩ = Γ_r/Γ_1. In summary, we have ⟨a†a⟩ = 0.93·Γ_r/Γ_1 and a corresponding expression for ⟨a⟩. Combining the decay rates in Table 1 and assuming that the pure dephasing rate is zero (Fig. 4b) at the maximum qubit frequency (Φ = 0), we get ⟨a†a⟩ = 0.67 and |⟨a⟩| = 0.36, which are close to our measured results. From this discussion, we can conclude that non-radiative decay is the main factor limiting the quantum efficiency of our single-photon source at the flux sweet spot, and that the overall quantum efficiency is limited by both the imperfect qubit excitation and the qubit coherence.

Of particular interest is the normalized zero-time-delay intensity correlation function g⁽²⁾(0) = ⟨(a†)²a²⟩/⟨a†a⟩². Its values of 0 ± 0.0139 and 0 ± 0.0264 for π- and π/2-pulses show an almost complete antibunching of the microwave field, demonstrating that the output is almost purely a single photon. To further demonstrate that our source is nonclassical, in Fig. 3b we reconstruct the Wigner function from the relation W(α) = (2/π) Tr[D(α) ρ D†(α) Π], using a maximum-likelihood method 34,36, where D(α) is the displacement operator with a coherent state α, Π is the parity operator, and ρ is the density matrix of the filtered output, extracted from the different orders of moments.

Besides the photon leakage, there are a number of different properties that are important for proper operation of the single-photon source, such as frequency tunability, quantum efficiency, stability, bandwidth and repetition rate. In the following paragraphs we study and evaluate these quantities for our single-photon source.

Bandwidth, repetition rate, and tunability

The repetition rate of our source is limited by the coupling strength between the qubit and the transmission line, which can be varied over a wide range by design. For our sample, the relaxation rate at the sweet spot is ~2π × 376 kHz, resulting in a repetition time of about 2.5 μs, several times longer than the qubit lifetime T_1 = 1/Γ_1 ≈ 420 ns. Our single-photon source is frequency-tunable over a wide frequency range. The operation frequency is adjusted by changing the qubit frequency with the external magnetic flux and adjusting the frequency of the microwave source that generates the π-pulse and the cancellation pulse. Here we show tunability of up to 600 MHz, limited by flux noise producing large jumps in the qubit frequency when the qubit is tuned too far away from the flux sweet spot (Φ = 0).

Intrinsic quantum efficiency

Different from the overall quantum efficiency, the intrinsic quantum efficiency depends only on the qubit coherence, and is the upper bound for the overall efficiency. We investigate the intrinsic quantum efficiency, given by η_q = Γ_r/(2Γ_2), of our single-photon source over the frequency range 4.9–5.5 GHz. The quantum efficiency is in the range 71–99% (red in Fig. 4a), extracted from the reflection coefficient. Typically, the pure dephasing rate Γ_ϕ can decohere the superposition of vacuum and a single-photon Fock state, resulting in a decrease in the single-photon quantum efficiency.
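The comparison between measured and expected moments can be reproduced numerically. The expression for |⟨a⟩| below is reconstructed by overlapping the stated filter f(t) = √(Γ_1) e^{−Γ_1 t/2} with the coherent decay e^{−Γ_2 t}; it is a sketch consistent with the quoted numbers, not necessarily the authors' exact formula:

```python
import numpy as np

# Rates from Table 1 (kHz; common 2*pi factors cancel in the ratios below)
G1, G2, Gr = 376.0, 188.0, 270.0

rho11 = 0.93        # excited-state population after a pi-pulse (QuTiP simulation)
sigma_minus = 0.44  # |<sigma->| after a pi/2-pulse (QuTiP simulation)

# Mode-matched moments: <a^dag a> = rho11 * Gr/G1 for the pi-pulse, and
# |<a>| = sigma_minus * sqrt(Gr*G1)/(G1/2 + G2) for the pi/2-pulse
# (the latter is the reconstructed filter-overlap integral; the paper quotes 0.36).
n_expected = rho11 * Gr / G1
a_expected = sigma_minus * np.sqrt(Gr * G1) / (G1 / 2 + G2)
print(f"<a^dag a> ~ {n_expected:.2f}  (measured 0.618)")
print(f"|<a>|     ~ {a_expected:.2f}  (measured 0.399)")
```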
Moreover, a single photon can be dissipated into the environment through the non-radiative decay channel, due to the qubit's interaction with the environment. We denote the reduction of the quantum efficiency from these two effects as η_p = Γ_ϕ/Γ_2 and η_n = Γ_n/(2Γ_2), respectively. Here, the values of η_p are based on the exponential decay of the qubit emission, as discussed below (black in Fig. 4a). Then, we calculate η_n indirectly, from η_n = Γ_n/(2Γ_2) = 1 − η_p − η_q (blue in Fig. 4a). We find that the non-radiative decay only affects the quantum efficiency near the maximal qubit frequency. When we tune the qubit frequency down, the pure dephasing dominates the reduction of the quantum efficiency. Therefore, it is necessary to understand which type of noise induces the pure dephasing rate.

To extract Γ_ϕ, we send a pulse with an amplitude close to a π/2-pulse, and measure the qubit emission with 3.84 × 10^7 averages. From the emission decay, we can extract both Γ_1 and Γ_2: the power decays ∝ e^{−Γ_1 t}, and the quadratures decay ∝ e^{−Γ_2 t}. Then, Γ_ϕ can be calculated from Γ_ϕ = Γ_2 − Γ_1/2. In Fig. 4b, the data (black) show that the pure dephasing rate increases as the qubit is tuned further away from the flux sweet spot. The averaged pure dephasing rate Γ_ϕ over the whole frequency range is about 2π × 10 kHz. The pure dephasing rate Γ_ϕ due to 1/f flux noise with the flux noise spectral density S_Φ(f) = A_Φ/f has the relationship given in ref. 37, where f_IR is the infrared cutoff frequency, taken to be 5 mHz as determined by the measurement time, and t is on the order of Γ_ϕ^{−1}. Using this relationship to fit the extracted Γ_ϕ values, shown as a dashed line in Fig. 4b, we obtain A_Φ^{1/2} ≈ 2 μΦ_0, which is consistent with other measurements 37,38.

In Fig. 4c, from 5.51 to 5.39 GHz, we find that the non-radiative decay rate Γ_n decreases gradually from 100 kHz to zero. We suspect that some TLSs with a certain bandwidth are located around the flux sweet spot. Since in this range the pure dephasing rate Γ_ϕ is less than 20 kHz (Fig. 4b), with its rate of increase slower than the rate of reduction of Γ_n, the quantum efficiency of our single-photon source in Fig. 4a grows from 80 to 94%. Especially near the flux sweet spot, the non-radiative decay is several times larger than the pure dephasing rate, so the quantum efficiency is mainly limited by the non-radiative decay. At the two exceptional data points where we obtain negative values of Γ_n (Fig. 4c), around 5.2 GHz, the efficiency is up to 99%, indicating that during the reflection-coefficient measurement at this frequency, both the pure dephasing rate and the non-radiative decay rate are very small. When we tune the qubit frequency further away from the flux sweet spot, the non-radiative decay remains close to zero, whereas the pure dephasing rate increases to around 35 kHz, leading to a lower quantum efficiency.

Fig. 4 Intrinsic quantum efficiency, pure dephasing and non-radiative decay rates as a function of the qubit frequency. (a) The intrinsic quantum efficiency η_q of our single-photon source over the 600 MHz tunable range. The efficiency is limited by the pure dephasing rate and the non-radiative decay rate of the qubit. These two factors reduce the efficiency by η_p and η_n respectively, where we have η_q + η_p + η_n = 1. (b) Pure dephasing rate Γ_ϕ as a function of the qubit frequency.
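The decomposition η_q + η_p + η_n = 1 follows directly from the rate definitions above and is straightforward to evaluate; a small helper illustrates it (a sketch with the sweet-spot rates from Table 1, not the authors' analysis code):

```python
def efficiency_budget(Gr, G1, G2):
    """Split the intrinsic efficiency budget of the emitter.

    Gr: radiative decay rate, G1: total relaxation rate, G2: decoherence rate
    (all in the same units). Returns (eta_q, eta_p, eta_n), which sum to 1.
    """
    Gphi = G2 - G1 / 2        # pure dephasing rate
    Gn = G1 - Gr              # non-radiative decay rate
    eta_q = Gr / (2 * G2)     # intrinsic quantum efficiency
    eta_p = Gphi / G2         # reduction from pure dephasing
    eta_n = Gn / (2 * G2)     # reduction from non-radiative decay
    return eta_q, eta_p, eta_n

# Example with the sweet-spot rates from Table 1 (kHz):
print(efficiency_budget(Gr=270, G1=376, G2=188))   # ~ (0.72, 0.00, 0.28)
```

At the sweet spot the dephasing term vanishes, reproducing the observation above that non-radiative decay dominates there.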
(c) Non-radiative decay rate Γ_n as a function of the qubit frequency, where we have Γ_n = Γ_1 − Γ_r.

Stability

Recently, many works have demonstrated that fluctuating TLSs can limit the coherence of superconducting qubits 27,39-41. Here, we investigate how these fluctuations affect different properties of our single-photon source. We repeatedly measure Γ_1 and Γ_2, interleaved at Φ = 0 and Φ = 0.09Φ_0, corresponding to ω_01,1 = ω_01(0) = 2π × 5.51 GHz and ω_01,2 = ω_01(0.09Φ_0) = 2π × 5.39 GHz, respectively. At the same time, the fluctuations of the qubit frequency are also obtained from the phase information of the emitted field, which carries information about the qubit operator ⟨σ⁻⟩ ∝ e^{iδω_01 t}, where δω_01 is the frequency difference between the frequency of the driving pulse and the qubit frequency. The total measurement spans 4.90 × 10^5 s (~136 h), with 2000 repetitions for each qubit frequency. Each repetition has 3.20 × 10^6 averages. From the values of Γ_1 and Γ_2, we extract the Γ_ϕ values shown in Fig. 5a, b, averaged over 8 repetitions. We find that Γ_1 remains stable for both zero detuning and 120 MHz detuning in Fig. 6. Assuming that Γ_r is stable over time, this implies that for this detuning Γ_n is also stable on the scale of Γ_r. However, the fluctuations of the qubit frequency, δ_f,i = (ω_01,i − ⟨ω_01,i⟩)/2π, and of the pure dephasing rate are evident, as shown in Fig. 5a for δ_f,1 and Fig. 5b for δ_f,2.

First, we note that the frequency jumps for the case of 120 MHz detuning (i.e. around ω_01,2) at t = 16 h and t = 90 h do not affect the pure dephasing rate. We suspect that this is due to a change in the flux offset through the SQUID: as we tune the qubit back and forth via the applied external flux, a change in the magnetic polarization of cold components could be induced. Accordingly, we cannot see significant fluctuations at the flux sweet spot. Other frequency-switching events, happening at t = 95 h and t = 120 h for 0 MHz detuning (i.e. around ω_01,1) and those before t = 10 h and at t = 64 h for 120 MHz detuning, show a strong positive correlation with the pure dephasing rate. Interestingly, the fluctuations do not happen at the same time for both detunings. Combining this with the fact that Γ_1 is stable, we speculate that this is due to two uncorrelated TLSs with small decay rates γ_i (i = 1, 2), close to ω_01,i, dispersively coupled to the qubit (see more details in the "Methods" section). Thus, these two TLSs can only cause pure dephasing, not dominate the relaxation, which can explain the stronger fluctuations in Γ_ϕ compared to Γ_1 shown in Fig. 6. Evidently, these two TLSs reduce the intrinsic quantum efficiency substantially, by up to 40% and 60%, as shown in Fig. 5c for detunings of 0 and 120 MHz, respectively. The effect of TLSs is stronger than that of other types of noise, especially in the case of zero detuning. At zero detuning we also note that, between these large fluctuations, the single-photon source can be stable for tens of hours. However, the qubit becomes more sensitive to the 1/f flux noise when it is detuned by 120 MHz, resulting in about a 20% fluctuation of the quantum efficiency over the total measurement time. This indicates that 1/f flux noise will be the dominant noise when we tune the qubit frequency away from the flux sweet spot. Since our single-photon source has a narrow bandwidth, it is meaningful to investigate the frequency stability over a long time.
From Fig. 5a, b, we find that at Φ = 0 the frequency fluctuations due to TLSs can be up to 100 kHz, which is nearly one third of the single-photon linewidth (Γ_r = 270 kHz). However, with the qubit frequency tuned down by just 120 MHz (Φ = 0.09Φ_0), the external flux jumps described above dominate the frequency shift of the single-photon source; the shifts can be up to 200 kHz, a factor of two larger than the effect from the TLSs.

DISCUSSION

In this paper, we demonstrate a method to implement a frequency-tunable single-photon source using a superconducting qubit. We measure the moments of the emitted field, and from those we evaluate both the second-order correlation function and the Wigner function. Our study illustrates that the intrinsic quantum efficiency of our single-photon source can reach up to 99%, which could be improved further by engineering a larger radiative decay rate of the qubit into the waveguide transmission line. Moreover, the photon leakage from the canceled input π-pulse is as low as 0.5% of a photon, indicating that our single-photon source is very pure. The frequency-tunable range of our single-photon source corresponds to 1600 × Γ_1, reaching the state of the art and enabling us to address quantum memories with a large number of different 'colors'.

We also study in detail the noise mechanisms which limit the intrinsic quantum efficiency. The non-radiative decay rate and the pure dephasing rate from the 1/f flux noise both contribute to the reduced quantum efficiency. The 1/f flux noise could be decreased by reducing the density of surface spins through surface treatment of the sample, e.g. annealing 42 and UV illumination 43. Finally, we investigate the stability of our single-photon source, which is important for long-time operation. The instability originates mainly from the increased sensitivity to 1/f flux noise when the source frequency is tuned down from the flux-insensitive bias point. The results show that the source can be stable for tens of hours at the maximum frequency. However, sometimes the quantum efficiency decreases by up to 60% when the qubit couples to TLSs. Besides reducing the quantum efficiency, the TLSs can also change the frequency of the single photons by up to one third of the linewidth. However, the environmental flux jumps will be the dominant noise shifting the single-photon frequency, which could be further reduced by magnetic shields, e.g. Cryoperm shielding 27,44.

METHODS

Figure 7a shows the detailed experimental setup. To characterize the qubit, a vector network analyzer (VNA) generates a weak coherent probe at the frequency ω_pr. The signal is fed into the input line, attenuated to be weak (Ω < Γ_1), and interacts with the qubit. Then, the VNA receives the reflected signal from the output line, after amplification, to determine the complex reflection coefficient r. Two-tone spectroscopy is then performed to obtain the qubit anharmonicity. Specifically, we apply a strong pump at ω_01 to saturate the |0⟩-|1⟩ transition. Meanwhile, we combine a weak probe with the strong pump via a 20 dB directional coupler. The frequency of the weak probe from the VNA is swept near the |1⟩-|2⟩ transition. When the probe is on resonance, we again get a dip in the magnitude response of r, leading to α = ω_01 − ω_12 = 2π × 0.251 GHz (not shown).

Fano-shape spectroscopy

When we measure the reflection coefficient at different qubit frequencies, we notice that at some frequencies the amplitude of the spectroscopy is not flat but has a Fano shape (Fig. 8a).
This Fano shape may affect the extracted Γ_r values, and we argue that the Fano shape originates from an impedance mismatch in the measurement setup, which results in a modified reflection coefficient as in Eq. (3) 31, where

tan(ϕ) = r_1 sin(2ϕ_0) / (t_1²β² + r_1 cos(2ϕ_0)), (4)

r_1 (t_1) is the reflection (transmission) coefficient at the place where the impedance mismatch is located, and β is proportional to the attenuation between that place and the sample. ϕ_0 = ωτ is the extra phase of the propagating wave from the propagation time τ, due to the distance between the qubit and the impedance mismatch. We use Eq. (3) to fit the data and extract the values of ϕ at different qubit frequencies, which are then fit to Eq. (4), as shown in Fig. 9a. The extracted r_1 ≈ 0.14, close to 0.1 (corresponding to −20 dB in power), and β ≈ 0.97, corresponding to 0.26 dB attenuation, indicate that the impedance mismatch probably arises from the directional coupler. Afterwards, to compensate for the impedance mismatch, we calculate r_comp = 1 − (1 − r_raw)·e^{iϕ}, where r_raw is the raw data (blue in Fig. 8a, b). The magnitude response of r_comp in Fig. 8c shows that the impedance mismatch has been corrected. We repeat this process for other qubit frequencies and then fit the calculated data to obtain Γ_r and Γ_2 (red stars in Fig. 9b, c). Compared to the values before correcting the impedance mismatch (blue dots in Fig. 9b, c), we find that they are close to each other.

Fig. 7 The measurement setup. LP, Iso, HEMT, and TWPA denote low-pass filters, isolators, a high-electron-mobility transistor amplifier, and a traveling-wave parametric amplifier.

Two-level fluctuator model

Figure 6 shows the fluctuations of Γ_1 and Γ_2 at ω_01,1/(2π) = 5.51 GHz and ω_01,2/(2π) = 5.39 GHz over 136 h. We denote g_i and Δ_i = (ω_TLS,i − ω_01,i)/(2π) as the coupling strength and the frequency detuning between the TLS and the qubit, respectively. In addition, Γ_n,i and Γ_1,i are the corresponding non-radiative decay rate and relaxation rate at each qubit frequency. To simplify the model, we let g_1 = g_2. When g_i ≪ Δ_i ≪ 120 MHz, we have a dispersive shift of the qubit frequency. Typically, the surface TLS coupling rates are on the order of g ≈ 100 kHz 40. Since the measured frequency shifts of both qubit frequencies are almost the same, about 40 kHz, the detuning to such a TLS is in the MHz range, about 9 × Γ_1,i. From the shortest duration of the TLS fluctuations in Fig. 5b, we can estimate the switching times of these two TLSs roughly as 2.88 × 10^4 s and 7.82 × 10^3 s, corresponding to γ_1 = 34.7 μHz and γ_2 = 127.9 μHz, respectively. According to Γ_n,i ∝ (g_i²/Δ_i²)·γ_i = 0.16% γ_i, these two TLSs can only cause pure dephasing, not dominate the relaxation. This can also explain the stronger fluctuations in Γ_ϕ compared to Γ_1 shown in Fig. 6. We emphasize that the new finding here is that TLSs can be activated independently, whereas only a single TLS was investigated in ref. 40.

Directional coupler parameters

The approximate parameters of the commercial directional coupler used in our setup, as measured by a VNA, are shown in Table 2. Even though these values are measured at room temperature, they should be close to the values at 10 mK. The attenuation between the input and a_in is about −21.7 dB including the insertion loss, very close to the −20 dB printed on the coupler.
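The compensation step lends itself to a compact numerical implementation. The sketch below applies r_comp = 1 − (1 − r_raw)·e^{iϕ} to a synthetic trace; the variable names, the ideal-dip model and the synthetic data are assumptions of the sketch, not the authors' code:

```python
import numpy as np

def compensate_fano(r_raw, phi):
    """Undo an impedance-mismatch-induced Fano distortion of the reflection
    coefficient, following r_comp = 1 - (1 - r_raw) * exp(1j * phi)."""
    return 1 - (1 - r_raw) * np.exp(1j * phi)

def fano_phase(r1, t1, beta, phi0):
    """Eq. (4): phase angle of the distortion from the mismatch parameters."""
    return np.arctan2(r1 * np.sin(2 * phi0),
                      t1 ** 2 * beta ** 2 + r1 * np.cos(2 * phi0))

# Example: synthetic distorted trace around resonance (arbitrary parameters).
delta = np.linspace(-1e6, 1e6, 401)              # probe detuning (Hz)
G2, Gr = 2 * np.pi * 188e3, 2 * np.pi * 270e3
r_ideal = 1 - Gr / (G2 + 2j * np.pi * delta)     # assumed ideal reflection dip
phi = fano_phase(r1=0.14, t1=0.99, beta=0.97, phi0=0.3)
r_raw = 1 - (1 - r_ideal) * np.exp(-1j * phi)    # synthetic mismatch distortion
r_comp = compensate_fano(r_raw, phi)             # distortion removed
print(np.allclose(r_comp, r_ideal))              # True
```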
Measurement consistency

In order to obtain the non-radiative decay rate and the resulting reduction of the quantum efficiency, we need to combine results from different measurements, as discussed in connection with Fig. 4. We obtain Γ_n = Γ_1 − Γ_r, where Γ_1 can be measured either from the exponential decay of the qubit emission or from the power spectrum, and Γ_r is based on the reflection coefficient. Therefore, it is necessary to check whether the qubit is stable across these measurements. In Fig. 10, we show the extracted values of Γ_2 from the different methods, over the frequency range 4.91–5.51 GHz. We find that the values of Γ_2 from the different methods agree well, except for the data points at 5.2 and 5.3 GHz. This inconsistency is probably due to a redistribution of TLSs 27,39 between the different measurements, since there was a few days' delay between them. Because of this inconsistency, we obtain slightly negative values of Γ_n and η_n, as shown in Fig. 4a, c, respectively.

DATA AVAILABILITY

The data that supports the findings of this study is available from the corresponding authors upon reasonable request.

Table 2 The values are measured at room temperature at the maximal qubit frequency by the VNA, with fluctuations from the maximum to the minimum of less than 1 dB in the range 4.8–5.6 GHz. Cancel and In denote the cancellation port and the input port in Fig. 1a, respectively. The measured attenuation between ports Cancel ↔ a_out is the same as that between ports In ↔ a_in.

Fig. 10 The Γ_2 values are extracted from the reflection coefficient, the off-resonant Mollow-triplet power spectrum, and the exponential decay of the qubit emission after a π/2-pulse, respectively. The error bars are two standard deviations. Note that the slight difference at 5.1 GHz between the Mollow-triplet and the other two measurements arises because the sampling rate of the digitizer is not large enough, leading to truncation at the edge of the measured power spectrum. See ref. 30 for more details on how to use the power spectrum to extract the qubit decay rates.
8,415.4
2021-09-23T00:00:00.000
[ "Physics" ]
A File Encoding Using A Combination of Advanced Encryption Standard, Cipher Block Chaining and Stream Cipher In Telkom Region 4 Semarang

The significant advance of information technology provides great comfort and convenience in managing data. This convenience, however, is also exploited by irresponsible people for crimes such as hacking, cracking, phishing, and so on. In Telkom Region 4 Semarang, there is a repository of important company data, such as customer data. Customer data is very important, and its contents must be kept confidential. The company has experienced significant losses due to information leakage caused by negligence in the last 5 years. For this reason, data security is necessary so that the data is safe and is not misused. This study applies the Advanced Encryption Standard algorithm in Cipher Block Chaining mode (AES-CBC) together with a stream cipher in order to secure data, thereby reducing the risk of theft of Telkom customer data. Based on the average avalanche effect value of 49.34% for AES-CBC and the stream cipher, files encrypted with AES-CBC and the Stream Cipher are difficult to crack, so data confidentiality is well maintained.

INTRODUCTION

The significant advance of information technology provides great comfort and convenience in managing data. Along with this convenience, negative impacts also arise, such as threats to the security of confidential personal data. This convenience is also exploited by irresponsible people for crimes such as hacking, cracking, phishing, and so on. Of course, this will harm certain parties, for example by compromising state secrets or the confidentiality of important company data. In August 2013, one of the biggest websites, Yahoo, was hacked; approximately 3 billion accounts were stolen. The hackers managed to obtain user account information such as names, email addresses, telephone numbers, dates of birth, passwords hashed with MD5, and even security questions and answers [1]. The impact of the hack made Verizon's acquisition value of Yahoo drop by approximately USD 1 billion.

In Telkom Region 4 Semarang there is a website dashboard containing important company data, such as customer data. Customer data is very important, and its contents must be kept confidential. The dashboard of the website can only be accessed by Telkom employees who have obtained access permission. However, data theft cannot be ruled out; for example, a third party might obtain an account to access the dashboard. If this customer data falls into the hands of an irresponsible third party and is misused for personal gain, this is of course very detrimental to Telkom and its customers. For this reason, data security is necessary so that the data is safe and is not misused. There are many ways to secure data, including transforming data using cryptographic techniques [2]. With cryptographic techniques, data is encoded or encrypted into confidential data, so that the data will not mean anything to unauthorized parties who manage to access it [3]. Confidential data that has been encrypted and received by the recipient can be decrypted back to the original data so that it can be understood. There are several algorithms that can be used to encrypt data, two of which are the Advanced Encryption Standard with Cipher Block Chaining (AES-CBC) and stream ciphers [4].
The AES algorithm is a block cipher that uses a permutation and substitution system (P-box and S-box) instead of a Feistel network like most block ciphers. AES, often called Rijndael, has been established by the National Institute of Standards and Technology (NIST) as the replacement for DES in current cryptographic standards [5]. As with block cipher algorithms in general, the Rijndael algorithm can be run in several modes of operation, namely Electronic Codebook (ECB) [6], Cipher Block Chaining (CBC) [4], Cipher Feedback (CFB), and Output Feedback (OFB). According to research [5], the level of security using the Cipher Block Chaining (CBC) mode of operation is higher than that of the AES Electronic Codebook (ECB) mode. In CBC, a feedback technique is applied to each block of bits: the encryption result of the previous block is fed back into the encryption and decryption of the next block. In other words, each block of ciphertext is used to modify the encryption and decryption process of the next block. CBC mode requires an IV (Initialization Vector) to start the encryption process [4].

A stream cipher is a type of symmetric-key cipher, where the key for encryption is the same as the key for decryption. This algorithm encrypts the plaintext into ciphertext by substituting it bit by bit. Stream ciphers use the XOR function, where the plaintext is XORed with the output of a keystream generator [7]. The level of security of a stream cipher lies in its keystream generator: the more random the output generated by the keystream generator, the more difficult it is for a cryptanalyst to break the ciphertext. To prevent attacks on the AES-CBC algorithm, a stream cipher algorithm is added to strengthen the encryption process and make it more secure against cryptanalysis.

Encryption and Decryption

Encryption is the process of securing data, or encrypting data, before the original data is sent to the recipient [8]. The encryption process converts the original data, or plaintext, into ciphertext, while the decryption process returns the ciphertext to its original plaintext. A cryptographic cipher, or algorithm, and a key are needed in the encryption and decryption process [9]. The purpose of encryption is to hide messages or information from unauthorized parties. In general, the encryption and decryption processes can be formulated as shown in (1) and (2):

C = E_K(P), (1)
P = D_K(C), (2)

where E is the encryption process, D is the decryption process, K is the key, P is the original or plaintext message, and C is the ciphertext. To perform the encryption process, inputs in the form of plaintext and a key are needed in order to produce ciphertext [10]. Meanwhile, the decryption process requires inputs in the form of ciphertext and a key in order to produce the plaintext.

Advanced Encryption Standard

AES is the Rijndael algorithm, invented by Dr. Vincent Rijmen and Dr. Joan Daemen. AES is a symmetric block cipher algorithm [11]. Thus, this algorithm uses the same key for encryption and decryption, and its input and output are blocks with a certain number of bits. The Rijndael algorithm was established by NIST (National Institute of Standards and Technology) as AES (Advanced Encryption Standard) in October 2000. Rijndael supports key lengths of 128 to 256 bits in 32-bit steps [12]. Because AES has fixed key lengths of 128, 192, and 256 bits, with full support of the flexible Rijndael algorithm, AES is currently known as AES-128, AES-192, and AES-256.
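As a concrete illustration of an AES-CBC round trip (a minimal sketch using the pycryptodome library; it is not the application described in this paper, which was written in Visual Basic .NET):

```python
# Minimal AES-CBC round trip using pycryptodome (pip install pycryptodome).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)   # 128-bit key (AES-128)
iv = get_random_bytes(16)    # random Initialization Vector

plaintext = b"example customer record (illustrative data only)"

cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(plaintext, AES.block_size))

decipher = AES.new(key, AES.MODE_CBC, iv)
recovered = unpad(decipher.decrypt(ciphertext), AES.block_size)
assert recovered == plaintext
```

Note that the same key and IV must be supplied for decryption, which is exactly the symmetric-key property described above.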
Here are the differences between the three versions of AES, as shown in Table 1. Using a key of Nk = 4 words, where each word consists of 32 bits, the total key length is 128 bits. Since the total key length is 128 bits, there are 2^128 ≈ 3.4 × 10^38 possible keys. Exhausting this key space would take up to 5.4 × 10^24 years, even with a computer capable of processing one million keys per second.

The encryption and decryption processes in the AES algorithm consist of four types of byte transformations, namely SubBytes, ShiftRows, MixColumns, and AddRoundKey. At the beginning of the encryption process, the plaintext undergoes an AddRoundKey byte transformation. After that, the resulting state undergoes the SubBytes, ShiftRows, MixColumns, and AddRoundKey transformations repeatedly for Nr rounds. The last round differs from the previous rounds in that the state does not undergo a MixColumns transformation. Meanwhile, the decryption process is the reverse of the encryption; because AES is a symmetric-key cipher, the key used by the sender and the receiver is the same.

Cipher Block Chaining (CBC)

CBC mode uses a feedback operation, also known as chaining. The encryption result of the previous block is fed back into the encryption and decryption of the next block. In other words, each ciphertext block is used to modify the encryption and decryption process of the next block. In CBC mode [6], random data is required as the first block for encryption. This random block of data is often called the initialization vector, or IV. The IV can be given by the user or generated randomly by the program. To produce the first cipher block, the IV takes the place of the previous ciphertext block. Conversely, in decryption, the first plaintext block is obtained by XORing the IV with the result of decrypting the first ciphertext block [13].

Stream Cipher

A stream cipher is a type of symmetric-key cipher, where the key for encryption is the same as the key for decryption [14]. This algorithm encrypts the plaintext into ciphertext by substituting it bit by bit [7]. Stream ciphers use the XOR function, where the plaintext is XORed with the key as in (3):

C = P ⊕ K, (3)

where C is the ciphertext, P is the plaintext, and K is the key. The level of security of a stream cipher lies in its keystream generator. The more random the output generated by the keystream generator, the more difficult it is for a cryptanalyst to break the ciphertext [15].

Proposed Method

In this research, the original plaintext or message is first encrypted using the Advanced Encryption Standard in Cipher Block Chaining mode (AES-CBC) to produce a temporary ciphertext, and then the temporary ciphertext is re-encrypted using the stream cipher algorithm to obtain the final ciphertext. Meanwhile, in the decryption process, the final ciphertext is returned to the original plaintext or message. The decryption process uses the same algorithms as the preceding encryption process. In the flowchart shown in Figure 2, the encryption process is carried out as follows (a code sketch of the combined pipeline is given after this list):

1) For the first step, input a .xlsx file, a key for AES-CBC, and a key for the stream cipher.
2) After that, the binary value of the file is XORed with the specified IV.
3) The XOR result is then XORed once again with the binary of the AES-CBC key.
4) The result then enters the SubBytes process, which substitutes each byte using the substitution table (S-box).
5) The next process is ShiftRows, which shifts each row other than the first: the 2nd row is shifted left by 1 byte, the 3rd row by 2 bytes, and the 4th row by 3 bytes. 6) Next, the MixColumns process multiplies each column of the state array by the predefined polynomial a(x); the operation proceeds like a matrix multiplication. 7) The result of MixColumns then undergoes the AddRoundKey process, which XORs the state with a round key. The round key is derived from the cipher key that was entered. 8) The process is repeated for Nr (number of rounds) rounds, except that the last round (the 10th round for AES-128) does not undergo the MixColumns transformation. 9) The final result of the AES-CBC encryption is re-encrypted using the stream cipher algorithm, i.e., XORed with a keystream. This result is the final ciphertext. The flowchart of the encryption and decryption process can be seen in Figure 2; the decryption process runs in the reverse direction of the encryption process: the ciphertext file is first decrypted with the stream cipher, and that result is then decrypted with AES-CBC to recover the final plaintext, i.e., the original file. RESULTS AND DISCUSSION This research uses files with the extensions *.xls and *.xlsx as the encrypted media. The application is written in the Visual Basic .NET programming language. The encryption algorithms used in this application are AES-CBC and the stream cipher. By entering the correct key, i.e., the same key as in the preceding encryption process, the AES-CBC decryption is carried out successfully, as Figure 17 shows. We used black-box testing as a tool to evaluate our experiment. Black-box testing is testing carried out only from the outside (the interface), without knowledge of what actually happens in the detailed process. Black-box testing is intended to exercise every functional unit of the application so that the application can be shown to work properly without system failure. From the black-box test in Table 2, it can be concluded that the implementation of the Advanced Encryption Standard-Cipher Block Chaining (AES-CBC) algorithm and the stream cipher runs well. For further evaluation, this study uses the avalanche effect calculation, as shown in Table 3. This test analyzes the performance and security of a cryptographic encryption algorithm. Here, the avalanche effect value is obtained from the number of differing bits in a comparison of plaintext and ciphertext, divided by the total number of bits; in this study one hex-value block is taken from each data sample, as shown in (5): Avalanche Effect = (differing bits / total bits) × 100%. An avalanche effect is considered good if the resulting bit change is between 45% and 60% [16][17]. The more bit changes that occur, the more difficult the cryptographic algorithm is to break. The size-difference test results are shown in Table 4. This test is carried out to determine any change in size after the application performs the encryption and decryption process. From these results, it can be concluded that the AES-CBC and stream cipher encryption does not change the bit size: the encrypted file has the same size as the original. The algorithm is thus shown to secure data without any change in size. The last test measures running time.
The running-time test is carried out to determine the processing time the application needs for encryption and decryption, as shown in Table 5 and Figure 4. Based on the results in Table 5, the encryption and decryption times are similar, with a maximum difference of 51 ms. The file size affects the duration of both processes: the larger the file, the longer encryption and decryption take. CONCLUSION From the research conducted, covering the design stages through to the implementation of the Advanced Encryption Standard-Cipher Block Chaining and stream cipher cryptographic application, the following conclusions were obtained: 1. From the black-box testing results, the application, written in the Visual Basic programming language, runs well in encrypting and decrypting Excel files (.xlsx). 2. From the avalanche effect calculation, the average value for the Advanced Encryption Standard-Cipher Block Chaining and stream cipher algorithm is 49.34%. This shows that file encryption with the AES-CBC algorithm and the stream cipher is difficult to crack, so it can secure files properly. 3. Data that has gone through the encryption and decryption process is unchanged and undamaged (identical to the original file); in other words, the Advanced Encryption Standard-Cipher Block Chaining and stream cipher methods run smoothly and successfully. 4. Under AES-CBC and stream cipher encryption, the encrypted file size is unchanged from the original size, so the algorithm is shown to secure data without any change in size. 5. The time required for encryption and decryption is similar for the two processes, and the file size determines how long encryption and decryption take. This study used 128-bit AES-CBC; further research could therefore try 192-bit or 256-bit AES-CBC. The key used for the stream cipher would also be better if it were longer. Future research is expected to explore a variety of different algorithm combinations.
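The avalanche-effect average of 49.34% quoted in conclusion 2 comes from the bit-count ratio of Eq. (5). A minimal sketch of that calculation, with hypothetical hex blocks standing in for the paper's sample data:

```python
def avalanche_effect(hex_a: str, hex_b: str) -> float:
    """Percentage of differing bits between two equal-length hex blocks,
    following Eq. (5): (differing bits / total bits) * 100%."""
    assert len(hex_a) == len(hex_b), "compare equal-length blocks"
    diff = int(hex_a, 16) ^ int(hex_b, 16)
    different_bits = bin(diff).count("1")
    total_bits = 4 * len(hex_a)            # 4 bits per hex digit
    return 100.0 * different_bits / total_bits

# One 128-bit block of plaintext vs. ciphertext (values are made up):
print(avalanche_effect("00112233445566778899aabbccddeeff",
                       "8f3ac19d7e52b06441fa9c88d2e7b350"))
```

A value near 50% indicates that roughly half of the output bits flip, which is the behavior the 45-60% criterion above rewards.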
The Binge Eating Genetics Initiative (BEGIN): study protocol Background The Binge Eating Genetics Initiative (BEGIN) is a multipronged investigation examining the interplay of genomic, gut microbiota, and behavioral factors in bulimia nervosa and binge-eating disorder. Methods 1000 individuals who meet current diagnostic criteria for bulimia nervosa or binge-eating disorder are being recruited to provide saliva samples for genotyping and fecal samples for microbiota characterization, and to record 30 days of passive data and behavioral phenotyping related to eating disorders using the app Recovery Record adapted for the Apple Watch. Discussion BEGIN examines the interplay of genomic, gut microbiota, and behavioral factors to explore etiology and develop predictors of risk, course of illness, and response to treatment in bulimia nervosa and binge-eating disorder. We will optimize the richness and longitudinal structure of deep passive and active phenotypic data to lay the foundation for a personalized precision medicine approach enabling just-in-time interventions that will allow individuals to disrupt eating disorder behaviors in real time before they occur. Trial registration The ClinicalTrials.gov identifier is NCT04162574. November 14, 2019, Retrospectively Registered. The Binge Eating Genetics Initiative (BEGIN) is a multipronged research study that 1) examines the interplay of genomic, gut microbiota, and behavioral factors to explore etiology and develop predictors of risk, course of illness, and response to treatment in BN and BED; and 2) optimizes the richness and longitudinal structure of deep passive and active phenotypic data to lay the foundation for a personalized precision medicine approach enabling just-in-time interventions that will allow individuals to disrupt eating disorder behaviors in real time before they occur. Genomics Despite their prevalence and the attendant personal and social costs, research into the genetic underpinnings of BN and BED is essentially absent. BEGIN represents the first contribution to a global effort to amass an adequate sample size to conduct a genome-wide association study of BN and BED in collaboration with the Eating Disorders Working Group of the Psychiatric Genomics Consortium (PGC-ED). The PGC-ED has rapidly advanced the study of the genomics of anorexia nervosa [13,14], identifying eight significant loci and reporting a panel of genetic correlations suggesting that anorexia nervosa may have both psychiatric and metabolic etiological underpinnings. BEGIN will further the mission of the PGC-ED by launching a parallel investigation into BN and BED. Intestinal microbiota Inspired by reports of associations between enteric microbes, host metabolism, and host behavior [15-17], along with reported differences in gut microbiota composition between patients with anorexia nervosa and healthy individuals [18-25], we have incorporated the study of the intestinal microbiota into BEGIN. In addition to characterizing the biogeography of the human microbiome (the cumulative genomes of the microbiota) of BEGIN participants, our analyses will identify associations between genes and microbial composition in BN/BED. The intention is to better understand the biological mechanisms of these illnesses in an effort to help identify potential drug targets and opportunities for novel interventions.
Deep phenotyping We are capturing real-time longitudinal digital phenotypic data on individuals with BN/BED that reflect the true complexity of human behavior. Using Apple Watch and iPhone devices, we are collecting active data on binge eating, purging, nutrition, mood, and cognitions with Recovery Record [26], a widely used eating disorder app grounded in cognitive-behavioral therapy, together with passive sensor data via native applications collected over a 30-day period. We will combine active Recovery Record-based measures and passively collected, continuous, sensor-based measurements of autonomic nervous system (ANS) activity and actigraphy to characterize patterns of when and where individuals are more or less likely to binge and/or purge in their daily lives. Finally, across and within individuals, we will identify low-risk and high-risk passive data patterns that will facilitate the prediction of transitions to high-risk states signaling impending binge or purge episodes (time-stamped by active app monitoring). This work has the potential to transform the standard of care for BN and BED by transcending current cognitive-behavioral therapy (CBT) approaches, which typically depend on retrospective self-report, and giving patients a tailored tool that will help them intervene when they need help the most. Specific aims genomics and microbiota In 1000 individuals with BN/BED, we will: Aim 1: Contribute genomic data to the next genome-wide association study (GWAS) of BN/BED conducted by the Eating Disorders Working Group of the Psychiatric Genomics Consortium (PGC-ED). Aim 2: Comprehensively characterize the biogeography of the human microbiome using high-throughput sequencing of the microbial 16S rRNA gene and shallow shotgun sequencing. Aim 3: Employ novel, and develop new, analytic methods to integrate GWAS, gut microbiota, and phenotypic data, resulting in predictive algorithms that index risk, course of illness, severity, disordered eating episodes, and treatment response. Specific aims digital longitudinal phenotyping Aim 1: Conduct longitudinal deep phenotyping of 1000 individuals with BN/BED using Recovery Record and Apple Watch. Aim 2: Predict the occurrence of binge eating and purging (vomiting) episodes in individuals with BN/BED using passive sensor data. Aim 3: Test theoretically derived regulatory models of binge eating and purging behaviors as reflected in differences in temporal patterns. Aim 4: Refine our capacity to predict binge and purge episodes by augmenting passive data with contextual factors collected by Recovery Record. Participants We are recruiting 1000 individuals with BN or BED. Inclusion criteria: 1) Currently meets Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5 [27]) criteria for BN or BED (confirmed via a validated questionnaire in the screening instrument; see Measures) 2) Resident of the US 3) All sexes 4) Age 18-45 years 5) Reads and speaks English 6) Existing iPhone user with iPhone 5 or later 7) Willing/able to wear an Apple Watch for the entire study period 8) Willing/able to use Recovery Record for the entire study period 9) Provides informed consent to have activity and self-reported Recovery Record data harvested 10) Ambulatory.
Exclusion criteria 1) Currently pregnant or breastfeeding 2) Bariatric surgery, due to its impact on eating patterns, including the following: Roux-en-Y gastric bypass, laparoscopic adjustable gastric banding, sleeve gastrectomy, duodenal switch with biliopancreatic diversion, gastric balloon, AspireAssist 3) Current use of hormone therapy 4) Inpatient treatment or hospitalization for eating disorders in the 2 weeks prior to study enrollment 5) Suicidality at screening 6) Antibiotic or probiotic use in the past 30 days (relevant to fecal sampling). Recruitment We are recruiting cases nationally from diverse geographical, socioeconomic, racial, and ethnic backgrounds via Recovery Record, social media, and the National Eating Disorders Association. Specifically, we launch tweets and Facebook posts that direct potential participants to the BEGIN URL https://www.med.unc.edu/psych/eatingdisorders/research/participate-in-a-study/begin-study/ where they can take a preliminary screen. In addition, Recovery Record pushes notifications about BEGIN to users. Recruitment flow is detailed in Fig. 1. Procedure Informed consent is obtained digitally via the Recovery Record app. Participants complete an eating disorders diagnostic questionnaire. Those who screen positive and meet all inclusion criteria are offered the opportunity to participate in the full study (with a second digital informed consent). All responses to questionnaires are encrypted and sent to a secure research server at the UNC Sheps Center for Health Services Research using secure transfer methodologies; the Sheps Center compiles and houses the data on servers specifically designed for Protected Health Information. Data are de-identified (the Sheps Center maintains the key to match records). Study data from Recovery Record and the Apple Watch are maintained by Recovery Record and include only the passive and active sources necessary for analyses, minimizing exposure of protected health information. To ensure that a high level of security is maintained, data transfer from Recovery Record occurs with end-to-end encryption and authentication protocols. Records are identified only with a second study number that can be linked using the data from the Sheps Center. Eligible participants are mailed a package containing a description of the study, a saliva collection kit, a microbiome collection kit, and an Apple Watch. Saliva kits are returned directly to RUCDR Infinite Biologics, where they are stored awaiting DNA extraction and genotyping. In Phase 1, participants returned microbiome kits to uBiome for sequencing; in Phase 2, kits are returned to the Carroll lab. Barcodes ensure accurate identification and coordination with phenotypic data. After enrollment and completion of the baseline survey, participants use the Apple Watch and Recovery Record for 30 days and complete midpoint and end-of-study surveys at 14 days and 30 days post-enrollment, respectively, to track the progress of eating disorder pathology, including binge eating and purging behaviors. Deep Phenotyping Using the Recovery Record app and the Apple Watch over a 30-day period for each individual, we conduct active and passive data capture to fully characterize disordered eating behaviors, physical activity, nutrition, gastrointestinal distress, sleep, and heart rate. This generates exceptional data enabling deep characterization of the course of BN/BED.
We expect that the likelihood of an event (i.e., a binge/purge) will decrease over the course of the 30 days and build this expectation into our statistical models. We further expect that although the likelihood of events will change over time, the dynamics of the events will not. These data can be broken down into four categories. First, self-report questionnaires consisting of scales well established to relate to BN and BED (see Self-report questionnaires) are collected, measured prior to enrollment or three times across the study. Second, stratified-sample intensive measurements consisting of daily mood and meal records are measured six times daily. Third, event-contingent intensive measurements ask participants to log binge and purge episodes. Finally, continuous passive data collection captures real-time physiological and movement data. These different data will be integrated through multilevel modeling and continuous-time systems modeling procedures [28,29]. Active data collection Self-report questionnaires All BEGIN study participants are screened for eligibility and consented using the Recovery Record iPhone app, which is free for users to download and is HIPAA compliant (www.recoveryrecord.com). All questionnaires are completed from within the Recovery Record app. ED100K [30] The ED100K questionnaire is a self-report eating disorders assessment based on the Structured Clinical Interview for DSM-5, Eating Disorders Module, administered prior to enrollment. Items assess DSM-5 criteria for anorexia nervosa, BN, BED, and other specified feeding and eating disorders. The ED100K-v1 was found to be a valid measure of eating disorders and behaviors [30]. Positive predictive values, i.e., the probability among those who screened positive of actually meeting anorexia nervosa Criterion B, Criterion C, or binge eating criteria, ranged from 88 to 100%. Among women who screened negative, the probability of not having these criteria or behaviors ranged from 72 to 100%. The correlation between questionnaire and interview for lowest illness-related BMI was r = 0.91. Eating Disorders Examination Questionnaire (EDE-Q) [31] The EDE-Q is a widely used, validated questionnaire capturing eating disorders pathology, including the frequency and severity of binge episodes. The EDE-Q is administered at the baseline, midpoint, and endpoint of the 30-day period. The Patient Health Questionnaire (PHQ-9) [32] is a 9-item, self-administered version of the PRIME-MD diagnostic instrument for common mental disorders. The nine items are based on the nine DSM-IV criteria for major depressive disorder and are scored from "0" (not at all) to "3" (nearly every day). The PHQ-9 has been found to be a reliable and valid measure of depression severity. The PHQ-9 is administered at the baseline, midpoint, and endpoint of the 30-day period. The Generalized Anxiety Disorder 7 (GAD-7) [33] is a 7-item, self-report questionnaire to screen for generalized anxiety disorder. Each symptom is scored on a 4-point scale: "not at all" (0), "several days" (1), "more than half the days" (2), or "nearly every day" (3). Items are then summed to create a symptom severity score. The GAD-7 is a reliable and valid measure of anxiety. The GAD-7 is administered at the baseline, midpoint, and endpoint of the 30-day period. ADHD Self-Report Scale (ASRS) [34] is an 18-item questionnaire that assesses symptoms associated with attention-deficit/hyperactivity disorder. Items are scored on a 5-point scale. The assessment has high internal consistency and validity [35]. The ASRS is administered at baseline.
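Since the PHQ-9 and GAD-7 above reduce to simple item sums, the scoring step is straightforward to automate. A minimal sketch with made-up item responses (not study data):

```python
# Summed severity scores for the PHQ-9 and GAD-7 described above.
# Item responses below are illustrative placeholders, not study data.
phq9_items = [2, 1, 0, 3, 1, 0, 2, 1, 0]   # nine items, each scored 0-3
gad7_items = [1, 2, 0, 1, 2, 1, 0]         # seven items, each scored 0-3

assert len(phq9_items) == 9 and all(0 <= x <= 3 for x in phq9_items)
assert len(gad7_items) == 7 and all(0 <= x <= 3 for x in gad7_items)

phq9_score = sum(phq9_items)   # possible range 0-27
gad7_score = sum(gad7_items)   # possible range 0-21
print(phq9_score, gad7_score)
```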
Rome III [36] To assess adult GI symptoms of the stomach and intestines, the relevant section (items 17-67) of the Rome III is administered at baseline. Stratified-sample intensive measurements Daily mood and meal records These data are collected inside the Recovery Record iPhone app, which primarily targets adherence to meal monitoring tasks. Participants are prompted with a push notification six times per day, corresponding to meal and snack times, to complete an evidence-based CBT-style question set (what was eaten, with whom, where, and what behaviors were used) in addition to optional symptom-focused questions covering current emotional state, urges to engage in eating disorder behaviors, sleeping patterns, hunger levels, gastrointestinal problems, and intrusive thoughts. Event-contingent intensive measurements Binge and purge records Participants are instructed to launch the Recovery Record Apple Watch app if they have experienced a binge or purge episode (Fig. 2). Action buttons are used to quickly identify the relevant symptom and how long ago it occurred, with response options in five-minute increments ranging from "Right now" to "30 mins ago". If an urge to engage in a behavior is identified, participants are additionally asked to rate the urge strength with the response options "Not at all", "Slight", "Moderate", "Strong", and "Overbearing". Actively monitored mood, meal, binge, and purge records and their respective timestamps are collected on the Recovery Record platform and shared with the research team via encrypted, authenticated TLS. Ecological momentary assessment-based logging has shown moderate to strong concordance with retrospective self-report of binge eating and purging [37]. Continuous passive data collection Apple Watch The number and timing of steps (physical activity) as well as 5-min-epoch heart rate are passively collected for each study participant using the Apple Watch and harvested by the Recovery Record app using Apple's Application Program Interface (API). The Apple Watch activates its sensor approximately every 5 min to record heart rate, sampling at 100 Hz using photoplethysmography. Built-in signal-processing algorithms aggregate the measurements to approximately 5-min intervals, a rate consistent with current methodological guidelines (e.g., Berntson [38]). To minimize data loss, these variables are uploaded to the Recovery Record server each time the Recovery Record app is opened on the iPhone while the Apple Watch is nearby, or at least once per day. Biological sampling Saliva sampling and genotyping Saliva samples are collected with RUCDR Infinite Biologics saliva collection kits. GWAS profiling will be performed together with additional samples collected by the PGC-ED using the optimal platform at the time of genotyping, most likely a version of the Illumina Global Screening Array (GSA). Fecal sampling and sequencing In Phase 1, as recipients of a scientific in-kind grant from the now-defunct company uBiome, we collected stool samples. uBiome comprehensively characterized the biogeography of the human microbiome using high-throughput sequencing of the microbial 16S rRNA gene and released all data to UNC for analysis. After the company dissolved in October 2019, all processing transferred to the Carroll Lab at UNC (I. Carroll, Director). In order to obtain high-resolution taxonomic and functional microbiome data, we will perform whole-genome shotgun sequencing.
Raw sequence data will be quality filtered and trimmed to remove bases with Phred quality scores below 20. Downstream bioinformatics analysis will consist of: i) taxonomic composition; ii) functional composition; iii) alpha diversity (measured by the absolute number of sequence variants and the Shannon diversity index) and beta diversity (quantified by Bray-Curtis and UniFrac metrics); and iv) computing descriptive statistics and identifying groups within the data, as well as performing statistical analyses between subgroups using additional metadata, where available [39]. Since sequencing technology and bioinformatics tools are rapidly advancing, we will utilize the most suitable methods and tools available at the time of analysis. Planned data analysis Genomics and microbiota aims We will combine BEGIN samples with other samples in the PGC-ED for meta-analysis. We will conduct cross-disorder analyses to identify loci that cut across diagnostic categories by leveraging existing high-quality results for anorexia nervosa, major depressive disorder, schizophrenia, bipolar disorder, and other psychiatric and metabolic phenotypes. We will use advanced methods [40] to compute SNP heritabilities and genetic correlations across psychiatric and metabolic traits. We will calculate metabolic and psychiatric trait and disorder polygenic scores (PGS) using PRSice (http://www.prsice.info). A leave-one-sample-out process will be carried out to calculate BN/BED PGSs. The calculated PGS will be the weighted number of risk alleles carried by each case and control. This aim will illuminate, from a fundamental perspective, the genetic architecture of BN/BED and its relation to other psychiatric disorders and metabolic conditions. We will compare the taxonomic composition and diversity of the gut microbiota of BEGIN participants, compare BN with BED, and compare both to a reference control panel. We will control for multiple covariates in all analyses (e.g., obesity). We can now rapidly perform GWAS on multiple phenotypes, e.g., GWAS for 22K transcriptomic, 8K proteomic, or 1K metabolomic measures. (1) We will adapt and extend these methods to evaluate host genomic-microbiota interactions by conducting ~15K GWAS for species-level microbial measures while controlling for multiple comparisons. (2) We will generate microbiome "modules", clusters of species with high intra-group correlations and low inter-group correlations. We will then conduct a GWAS for these modules. (3) For all analyses, we will pay particular attention to genomic regions highlighted in the prior literature (e.g., MHC, autoimmunity, gut barrier, inflammatory bowel disease). (4) We will utilize publicly available databases of summary statistics across a range of psychiatric, personality, metabolic, and physical activity phenotypes and employ both trait-specific polygenic scores (PGS) and multi-polygenic scores (MPS) to predict outcomes. We will use a novel MPS approach developed by collaborator Breen and colleagues [41], which exploits genetic correlations between the outcome trait and a multitude of traits by using the joint predictive power of multiple polygenic scores in one regression model. We will select relevant GWAS from a centralized repository of summary statistics to predict BN, BED, severity, and treatment outcome.
Using repeated cross-validation, we will train and validate the prediction models using elastic net regularized regression, a multiple regression model suited to handling a large number of correlated predictors while preventing overfitting [42]. We will then add microbiota and phenotyping variables into the model to improve predictive accuracy. Digital longitudinal phenotyping aims Our dynamic systems approach capitalizes on a combination of the passive and active data collection to address all three of the longitudinal phenotyping aims. Each stable state can be thought of as having homeostatic properties that are reflected in associations between different levels of derivatives (i.e., change in value with respect to time). For example, the relationship between changes in heart rate from one moment to the next and the value of heart rate at the previous moment characterizes how heart rate fluctuates homeostatically about a "set point." This set point represents the heart rate value to which the individual's body returns when the person is at rest [43]; a numerical sketch of this set-point estimation is given below. This association characterizes not only the homeostatic heart rate value itself but also the rate of return to the set point when a person's heart rate is perturbed (e.g., by distress prior to a binge/purge episode, or by the physical load created during exercise). Higher-order derivatives, and accounting for more variables simultaneously, allow testing of more complex homeostatic patterns (e.g., cycles) while retaining this concept of rate of return to the set point (i.e., systemic stability). Aim 2 will be tested by first depicting the dynamics that lead up to a binge or purge event in a multilevel model. Aim 3 will be addressed by depicting the dynamics once a binge or purge event has occurred. In this case, analyses will focus on the 2 h after binge/purge events (but not within an hour of a future event), again modeling changes in heart rate and steps as a function of current levels of heart rate and steps. Aim 4 will require depicting each instant in time in terms of the risk of being in one of the temporal states associated with subsequent binge eating and/or purging. To do so, we will utilize the posterior probabilities from a latent mixture model in which each pattern is differentiated by associations among different levels of change. Mixture modeling is a taxonomic approach in which timepoints within and between individuals can be grouped together as a function of a model; here, the model will differentiate groups of data according to their dynamic properties. To help ensure reproducibility, the sample will be split in half, with each half used to confirm competing models generated on the other. Under large-data circumstances such as these, the primary concern is not power but rather a combination of avoiding overfitting and properly gauging effect sizes. Generating competing models allows each half-sample to serve as confirmation for the other, with the model that fits better on both samples providing the more generalizable solution. Discussion As a multi-pronged investigation, BEGIN will have broad impact across various dimensions of the eating disorders field. First, in the biological domain, BEGIN will allow us to identify genetic and gut microbiota contributors to disorder risk and maintenance and to identify genomic, enteric microbial, and behavioral predictors of outcome.
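A minimal numerical sketch of the set-point estimation referenced above, on synthetic heart-rate data (the study's actual analyses use multilevel and continuous-time models, so this is only an illustration of the idea): regress moment-to-moment changes on the preceding level and read the set point and the rate of return off the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate heart rate relaxing toward a 70 bpm set point
# (an Ornstein-Uhlenbeck-like process; all parameters are illustrative).
n, setpoint, beta = 2000, 70.0, 0.05
hr = np.empty(n)
hr[0] = 90.0
for t in range(1, n):
    hr[t] = hr[t - 1] + beta * (setpoint - hr[t - 1]) + rng.normal(0.0, 0.5)

# Regress the change on the previous level: dHR_t = b0 + b1 * HR_{t-1}.
# Then the estimated set point is -b0/b1 and -b1 is the rate of return.
dhr = np.diff(hr)
X = np.column_stack([np.ones(n - 1), hr[:-1]])
b0, b1 = np.linalg.lstsq(X, dhr, rcond=None)[0]
print(f"set point ~ {-b0 / b1:.1f} bpm, return rate ~ {-b1:.3f} per step")
```

With real data, the same regression would be fit within person, and time-varying covariates (steps, logged events) would enter the model.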
Second, in the behavioral domain, BEGIN will allow us to build algorithms that predict behavioral events (e.g., impending binges or purges) to enable real-time intervention via wearable technology. We intend this to be a transformative study in the field of eating disorders. Through deep longitudinal phenotyping via the Apple Watch, we designed BEGIN to rapidly accelerate progress toward personalized precision medicine for BN and BED. Advances in eating disorders treatment have been slow and incremental. In our Agency for Healthcare Research and Quality review of treatments for BED [44], we noted that the evidence base was limited by small samples and by single studies introducing small variations on core therapeutic approaches with little or no additive efficacy. Wearable sensors, such as the Apple Watch with the adapted Recovery Record app, offer the opportunity to develop a transformative improvement in BN and BED treatment. BN and BED are model disorders with discrete and measurable pathognomonic unhealthy behaviors. By applying dynamical systems models to the passive and active data that we collect, we will bypass historical one-size-fits-all CBT interventions for BN and BED and immediately enter the era of personalized interventions for eating disorders. Not only will we be able to build models that predict binge and purge episodes within the acute phase of the illness, but personalized extensions of these models will allow us to identify and alert individuals to impending slips and relapses after recovery. Although traditional CBT interventions that rely on in-session retrospective recall will never be entirely obsolete, we expect that the just-in-time approach afforded by our dynamical systems models will make them a central feature of the future treatment of BN/BED. Results from BEGIN will set the stage for subsequent studies in which we will have achieved the ability to discriminate across types of events (e.g., exercise, meal, binge, purge), allowing us to build in accurate push notifications when an individual's passive and active data signal an impending binge or purge, truly tailoring treatment and delivering it in the moment. Moreover, Recovery Record already has a clinician interface, and we predict that we will be able to incorporate our models into provider interfaces such that clinicians will be able to view and interact with the alerts that emerge, thus supporting the provision of data-informed care. Ultimately, we foresee that this study will advance both cognitive-behavioral approaches to understanding and treating eating disorders and dynamical systems theories of behavior change, incorporating both intensive longitudinal behavioral and physiological data. Although our focus is on eating disorders, we intend our models to be readily adaptable to other psychiatric (and somatic) conditions that have identifiable, measurable indices, in order to usher us more rapidly toward individualized interventions that attend to the psychology, the biology, and the dynamic environment of the individual. Availability of data and materials Our liberal data- and analysis-sharing principles will make genomic, microbiota, and phenotypic data and scripts widely available for access by other scientists to maximize the utility of our investigation. The datasets generated and/or analyzed during the current study will be available in the National Data Archive (https://nda.nih.gov/) and on the Open Science Framework (https://osf.io/), DOI https://doi.org/10.17605/OSF.IO/KJ7WR.
DNA samples will be available from the NIMH Repository and Genomics Resource (https://www.nimhgenetics.org/order-biosamples/how-to-order-biosamples). Ethics approval and consent to participate BEGIN was approved by the University of North Carolina Biomedical Institutional Review Board (IRB), Protocol #17-0242. All participants provided informed written online consent to participate. The electronic consent process was approved by the IRB. Consent for publication Not applicable.
Results of a Search for Paraphotons with Intense X-ray Beams at SPring-8 A search for paraphotons, or hidden U(1) gauge bosons, is performed using an intense X-ray beamline at SPring-8. The "Light Shining through a Wall" technique is used in this search. No excess of events above background is observed. A stringent constraint is obtained on the photon-paraphoton mixing angle, $\chi < 8.06\times 10^{-5}$ (95% C.L.) for $0.04\ {\rm eV} < m_{\gamma'} < 26\ {\rm keV}$. Introduction Paraphotons, or hidden-sector photons, are gauge bosons of a hypothetical U(1) symmetry. Many extensions of the Standard Model predict such a symmetry [1]. Some of them also predict a tiny mixing of paraphotons with ordinary photons through very massive particles which carry both electric and hidden charge [2]. This effective mixing term induces flavor oscillations between paraphotons and ordinary photons [3]. With this oscillation mechanism, a highly sensitive search can be carried out with the method called the "Light Shining through a Wall" (LSW) technique [4], in which incident photons oscillate into paraphotons that are able to pass through a wall and oscillate back into photons. Recently, a detailed theoretical calculation has been performed for the axion LSW experiment [5]. Since both axion and paraphoton conversion are described by the same quantum oscillations, the conversion probability for axions can be interpreted as that of paraphotons by replacing the parameter βω/m² with χ in Eq. (29) of [5]. After propagation in vacuum over a length L, the probability of converting a paraphoton into a photon (or vice versa) is given by p_γ↔γ′(L) = 4χ² sin²[(ω − √(ω² − m_γ′²)) L/2], where χ is the mixing angle, m_γ′ is the mass of the paraphoton, and ω is the energy of the photon. In the low-mass region (m_γ′ ≪ ω), it reduces to the well-known expression for a neutrino-like oscillation, p_γ↔γ′(L) = 4χ² sin²(m_γ′² L/4ω). Searches have been performed with this LSW technique using optical photons [6] and microwave photons [7], without any evidence. Useful summary papers are available (see, e.g., [8]). For an axion LSW search, an experiment using X-rays has been performed at ESRF [9]. In this letter, we report a new search for paraphotons with the LSW method. We use an intense X-ray beam created by a long undulator at the SPring-8 synchrotron radiation facility to search for paraphotons with masses in the (10⁻¹-10⁴) eV region. Experimental Setup The BL19LXU beamline [10] at SPring-8 (Fig. 1) is used as the X-ray source. A 30-m-long undulator is placed on the electron storage ring as shown in Fig. 1. The bunch length of the electrons in the storage ring is 40 ps, and the bunch interval is 23.6 ns. The time structure of the X-ray beam reflects the bunch structure of the electrons, but we regard it as a continuous beam because the time resolution of the X-ray detector is larger than this structure. The energy of the X-ray beam is tunable between 7.2 and 18 keV by changing the gap width of the undulator. The higher energy of its 3rd harmonic (21.6-51 keV) is also available. The X-ray beam is monochromated with a Si(111) double-crystal monochromator to the level of Δω/ω ∼ 10⁻⁴. The reflection angle is determined by the Bragg condition and is typically ∼100 mrad for the energies we use. The beam size is about 1 mm, and the vertical profile ρ(x) is measured with a slit with 10 µm pitch. The shape of ρ(x) is approximately Gaussian with a FWHM of 383 µm. From the monochromator, the X-ray beam is guided through vacuum tubes whose total length is about 3.5 m. The tubes are evacuated to better than 4 × 10⁻⁵ Pa, and a double mirror is placed at the downstream edge of the tube.
These mirrors are adjusted for total reflection, and their reflection angle is set to 3.0 mrad during our search (2.0 mrad only for the 26 keV search). They serve as a beam-path filter: only X-rays satisfying the severe condition of total reflection are bounced upward, and other off-axis background photons are blocked. The X-ray beam changes its path at these mirrors, and only the reflected beam is selected with a slit and guided to the X-ray detector. Two beam shutters are placed in the beamline. The Main Beam Shutter (MBS) is placed just before the monochromator, and the DownStream Shutter (DSS) is placed between the monochromator and the mirrors. Photons convert into paraphotons in the vacuum tube between the monochromator and the DSS, and then convert back in the region between the DSS and the mirrors. The lengths of these regions at the beam center are (277 ± 2) cm and (65.4 ± 0.5) cm, respectively. A germanium detector (Canberra BE2825) is used to detect the X-ray signal. The diameter and thickness of its crystal are 60 mm and 25 mm, respectively. The signal of the Ge detector is shaped with an amplifier (ORTEC 572) and recorded by a peak-hold ADC (HOSHIN C-011). The energy resolution of the detector is measured with 55Fe, 68Ge, 57Co, and 241Am sources; the typical energy resolution at 10 keV is 0.17 keV (σ: standard deviation). The absolute efficiencies of the X-ray detector (ε) are also measured with the same sources. The measured efficiencies are consistent with GEANT4 Monte Carlo results, which include all attenuation in the air, in the carbon composite window (thickness = 600 µm) of the detector, and in the surface dead layer (thickness = (7.7 ± 0.9) µm) of the germanium crystal. The detector is shielded by lead blocks about 50 mm thick, except for a collimator on the beam axis whose hole diameter is 30 mm, much larger than the X-ray beam size. The positions of the collimator and the germanium crystal relative to the beam are adjusted using photosensitive paper that is sensitive to the X-rays. After the monochromator reaches thermal equilibrium, the beam flux becomes stable. The absolute flux of the X-ray beam and its stability are monitored by a silicon PIN photodiode (Hamamatsu S3590-09, thickness = 300 µm). This photodiode is inserted in front of the collimator of the lead shield, and the DSS is opened for the flux measurement. During this measurement, the collimator hole is closed to avoid radiation damage to the germanium detector. The energy deposited in the PIN photodiode is calculated from its output current and the W-value of silicon (W = 3.66 eV). The fraction of the X-ray energy deposited in the PIN photodiode is computed with a GEANT4 simulation for each energy. To correct for the saturation effect of the PIN photodiode, thin aluminum foils are inserted before the photodiode to attenuate the X-ray flux. The attenuation coefficient of aluminum is also checked by GEANT4 simulation. The flux can be measured with an accuracy of better than 5%. Measurement and Analysis The paraphoton search is performed from 14 to 20 June 2012. Nine measurements are performed with different X-ray energies from 7.27 keV to 26.00 keV. The results are summarized in Tab. 1. Beam intensities (I) are monitored every 3-4 hours by the PIN photodiode as described in the previous section. Time drifts of the beam flux (< 10%) are observed only at the beginning of each measurement since, due to the heavy heat load, it takes about 30 minutes for the experimental setup to become thermally stable.
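The flux estimate described above follows from the W-value: each absorbed photon of energy ω deposits a fraction f of its energy in the diode, producing fω/W electron-hole pairs and hence a proportional photocurrent. A minimal sketch with illustrative numbers (the actual photocurrents and GEANT4 deposition fractions are not reproduced here):

```python
# Photon rate from the PIN-diode photocurrent via the W-value of silicon.
E_CHARGE = 1.602e-19   # elementary charge (C)
W_SI = 3.66            # eV per electron-hole pair in silicon

def photon_rate(current_a: float, photon_energy_ev: float,
                dep_fraction: float) -> float:
    """X-ray photons per second implied by a given photocurrent."""
    pairs_per_second = current_a / E_CHARGE
    ev_per_second = pairs_per_second * W_SI      # deposited energy rate
    return ev_per_second / (dep_fraction * photon_energy_ev)

# e.g. 1 uA photocurrent at 10 keV with 80% of the photon energy deposited:
print(f"{photon_rate(1e-6, 10e3, 0.8):.3e} photons/s")
```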
Fluxes that are well stabilized in thermal equilibrium are listed and used for the analysis. Energy calibration of the detector is also performed every 3-4 hours with a 57Co source. In this paper, all error values represent 1 sigma. The background (BG) spectrum (Fig. 2(a)) is measured from 16 to 17 June with the MBS closed. The rest of the setup, including the lead shields, is exactly the same as in the paraphoton searches. The total livetime of the BG measurement is 1.6 × 10⁵ s. The BG rate at 7.00 keV is (10.9 ± 0.3) × 10⁻³ s⁻¹ keV⁻¹ and gradually decreases toward (4.6 ± 0.2) × 10⁻³ s⁻¹ keV⁻¹ at 26.00 keV. No apparent structure is observed in the measured BG spectrum except at 10.6 keV and 12.6 keV, from X-rays emitted by the lead shields. We define the signal region as within ±2σ of the beam energy ω. Since the signal regions of the different measurements do not overlap, the BG spectrum is used in common for all subtractions (Fig. 2(b)). The subtracted signal rates (ΔN) are also shown in Tab. 1, and no significant excess is observed in any of the 9 measurements. Using these rates, we set upper limits on the signal rates of the measurements. Gaussian distributions are assumed from the central values and standard deviations of ΔN, and the 95% C.L. positions in the physical (i.e., positive) region are taken as the signal upper limits (ΔN95). Finally, the upper limits on the LSW probability (P95) are obtained as ΔN95/(εI). To translate P95 into a limit on the mixing parameter χ, we need to take the beam profile ρ(x) into account. Since the incident angles of the beam on the second crystal of the monochromator and on the first mirror are very shallow, ρ(x) affects the lengths of the oscillation regions. As a result, these lengths are smeared by ρ(x), and the LSW probability is written as P = ∫ dx ρ(x) p_γ→γ′(L_1(x)) p_γ′→γ(L_2(x)). Here, L_1(x) is the length of the photon → paraphoton oscillation region modified by the vertical position, and L_2(x) is that of the re-oscillation region. The integration is calculated numerically for each ω as a function of m_γ′, and P95 is translated into a limit on χ. Figure 3 shows the 95% C.L. limit obtained using the data set of the 9.00 keV measurement; the region above the line is excluded. The limit is smoothed by the smearing effect of ρ(x) and becomes constant for masses from 5 eV up to around 9 keV (labeled "(b)"). The oscillations of the limit in region (a) are ruled out by the combination of the 9 measurements. The combined result is obtained by the described procedure, constructing the distributions in χ⁴ for each measurement and multiplying them together. The 95% C.L. upper limit of the combined result is shown in Fig. 4 together with other results. The worst value, χ_worst = 8.01 × 10⁻⁵, appears at 1.39 eV. Systematic errors on the energy scale of the detector and on the oscillation-region lengths, including the contribution from the uncertainty of ρ(x) (ΔL < 0.5 mm), are estimated by varying the parameters and amount to Δχ_worst/χ_worst = +0.52/−0.15%. Conservatively, χ_worst + Δχ_worst represents our final result, χ < 8.06 × 10⁻⁵ (95% C.L.). This result is valid for masses up to 26 keV, the maximum beam energy of our search. Our result is the most stringent terrestrial limit for masses around the eV region. Conclusion A paraphoton search has been performed at the BL19LXU beamline of the SPring-8 synchrotron radiation facility. A double oscillation process, photons oscillating into paraphotons and back into photons, is assumed, and photons passing through a wall are searched for. No such photons are observed, and a new limit on the photon-paraphoton mixing angle, χ < 8.06 × 10⁻⁵ (95% C.L.),
is obtained for 0.04 eV < m_γ′ < 26 keV. Figure 4: The obtained 95% C.L. limit on the paraphoton mixing angle compared with other laboratory experiments. "Rydberg" is the limit from measurements of Rydberg atoms [11], "Coulomb" is from tests of Coulomb's law [12], and BFRT, BMV, GammeV, LIPSS, and ALPS are limits from LSW experiments using optical lasers [6].
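For orientation, the double-oscillation probability entering this analysis can be evaluated directly in the low-mass limit quoted in the introduction, p(L) = 4χ² sin²(m²L/4ω) in natural units. The sketch below uses the two oscillation lengths of this setup but omits the smearing over the measured beam profile ρ(x); all numbers are illustrative only.

```python
import numpy as np

HBARC_EV_CM = 1.973e-5   # hbar*c in eV*cm; converts cm to natural units

def p_conv(chi, m_ev, length_cm, omega_ev):
    """Single photon <-> paraphoton conversion probability, valid m << omega."""
    L = length_cm / HBARC_EV_CM                 # length in 1/eV
    return 4.0 * chi**2 * np.sin(m_ev**2 * L / (4.0 * omega_ev))**2

def lsw_probability(chi, m_ev, omega_ev, L1_cm=277.0, L2_cm=65.4):
    """'Light shining through a wall': convert before the DSS, reconvert
    after it, so the total probability is the product of the two."""
    return (p_conv(chi, m_ev, L1_cm, omega_ev)
            * p_conv(chi, m_ev, L2_cm, omega_ev))

masses = np.logspace(-2, 1, 500)                          # paraphoton mass (eV)
probs = lsw_probability(8.0e-5, masses, omega_ev=9.0e3)   # 9 keV beam
print(f"max LSW probability: {probs.max():.3e}")
```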
Tuning of heat and charge transport by Majorana fermions We investigate theoretically the thermal and electrical conductances of a system consisting of a quantum dot (QD) connected both to a pair of Majorana fermions (MFs) residing at the edges of a Kitaev wire and to two metallic leads. We demonstrate that both quantities reveal pronounced resonances, whose positions can be controlled by tuning the asymmetry of the couplings between the QD and the pair of MFs. Similar behavior is revealed for the thermopower, the Wiedemann-Franz law, and the dimensionless thermoelectric figure of merit. The considered geometry can thus be used as a tuner of heat and charge transport assisted by MFs. Majorana fermions (MFs) are particles that are equivalent to their antiparticles. The concept was first proposed in the domain of high-energy physics, but the existence of elementary excitations of this type was later predicted for certain condensed matter systems. In particular, MFs emerge as quasiparticle excitations characterized by zero-energy modes [1,2] appearing at the edges of the 1D Kitaev wire [3-7]. The Kitaev model describes the emerging phenomena of p-wave, spinless topological superconductivity. The Kitaev topological phase can be achieved experimentally in a geometry consisting of a semiconducting nanowire with spin-orbit interaction put in contact with an s-wave superconductor and placed in an external magnetic field [8,9]. Other condensed matter systems have also been proposed as candidates for the observation of MFs. They include ferromagnetic chains placed on top of superconductors with spin-orbit interaction [10,11], the fractional quantum Hall state with filling factor ν = 5/2 [12], three-dimensional topological insulators [13], and superconducting vortices [14-16]. MFs residing at the opposite edges of a Kitaev wire are elements of a robust nonlocal qubit which appears to be immune to environmental decoherence. This has attracted the interest of researchers working in the domains of quantum information and transport, as systems with MFs [17-19] can in principle be used as building blocks for the next generation of nanodevices [20,21], including current switches [20] and quantum memory elements [21]. At the same time, similar systems have been proposed as thermoelectric nanodevices [22-25]. In this work, following the proposals of thermoelectric detection of MF states [22-25], we explore theoretically zero-bias thermal and electrical transport through one particular geometry, consisting of an individual QD coupled both to a pair of MFs and to metallic leads, as shown in Fig. 1(a). The MFs reside at the edges of a topological U-shaped Kitaev wire, similar to the case of ref. [19]. The coupling of the QD to the MFs is considered asymmetric, while the coupling to the metallic leads is symmetric, and the MFs are assumed to overlap with each other.
The results of our calculations clearly show that the thermoelectric conductances, the thermopower, the Wiedemann-Franz law [26], and the dimensionless thermoelectric figure of merit (ZT), as functions of the QD electron energy, demonstrate resonant behavior. Moreover, the position of the resonance can be tuned by changing the coupling amplitudes between the QD and the MFs, which allows the system to operate as a tuner of heat and charge assisted by MFs. Model The setup depicted in Fig. 1(a) is treated with a Hamiltonian [Eq. (1)] in which the electrons in the leads α = H, C (for the hot and cold reservoirs, respectively) are described by the operators c†_αk (c_αk) for the creation (annihilation) of an electron in a quantum state labeled by the wave number k and energy ε_k. For the QD, d†_1 (d_1) creates (annihilates) an electron in the state with energy ε_1. The energies of the electrons both in the leads and in the QD are counted from the chemical potential μ (we consider only the limit of small source-drain bias, thus assuming that the chemical potential is the same everywhere). V stands for the hybridization between the QD and the leads. The asymmetric coupling between the QD and the MFs at the edges of the topological U-shaped Kitaev wire is described by the complex tunneling amplitudes λ_A and λ_B. Introducing an asymmetry in the couplings can account for the presence of a magnetic flux, which can be included via a Peierls phase shift [27]. ε_2 stands for the overlap between the MFs. Without loss of generality, we can parameterize the relative phase between λ_A and λ_B by an integer n = 0, 1, 2, … corresponding to the total flux through the ring of Fig. 1, and introduce an auxiliary nonlocal fermion built from the two MFs. This parameter is experimentally tunable by changing the external magnetic field. This fact gives certain advantages to our proposal with respect to previous works with asymmetric couplings between a single QD and a pair of MFs at the ends of a topological Kitaev wire [28-31]. According to ref. [32], the parameter ε_2 describing the overlap between the MFs depends on the magnetic field in an oscillatory manner, and the tunneling amplitudes demonstrate the same behavior (see Sec. III-A of ref. [30]); thus the external magnetic field affects not only the relative phase between λ_A and λ_B but their absolute values as well. To fulfill the condition |λ_B| < |λ_A| one should place the QD closer to the MF η_A than to the MF η_B. We map the original Hamiltonian onto one in which the electronic states d_1 and d_2 are connected via normal tunneling t and bound as a delocalized Cooper pair with binding energy Δ [Eq. (2)]. This expression represents a shortened version of the microscopic model of the Kitaev wire, corresponding to the Kitaev dimer (see Fig. 1(b)). As was shown in refs [33] and [34], this model allows a clear distinction between a topologically trivial and a Majorana-induced zero-bias peak in the conductance. In what follows, we use the Landauer-Büttiker formula for the zero-bias thermoelectric moments ℒ_n [22,23], ℒ_n = (1/h) ∫ dε (−∂f/∂ε) (ε − μ)ⁿ 𝒯(ε), where f is the Fermi-Dirac distribution and 𝒯(ε) is the transmittance. Figure 1: (a) Sketch of the geometry we consider. A topological U-shaped Kitaev wire with a pair of MFs η_A and η_B is placed in contact with a QD, which is connected as well to two metallic reservoirs. The coupling of the QD to the MFs is asymmetric and is characterized by the tunneling matrix elements λ_A and λ_B, while the coupling to the metallic leads is symmetric and is characterized by the tunneling matrix element V. ε_2 denotes the coupling between the two MF states.
(b) Equivalent auxiliary setup (the Kitaev dimer) resulting from the mapping of the original system onto a system with a nonlocal fermion residing in QD_2. t is the tunneling matrix element between QDs 1 and 2, and Δ is the binding energy of the Cooper pair delocalized between them. In Eq. (3), 𝒢̃_{d_1 d_1}(ε) is the retarded Green's function of the QD in the energy domain ε, which determines the transmittance 𝒯(ε); it is obtained from the Fourier transform of the time-domain Green's function 𝒢_{d_1 d_1}(τ) = −(i/ħ) θ(τ) ⟨{d_1(τ), d†_1(0)}⟩, expressed in terms of the Heaviside function θ(τ) and the thermal density matrix ρ for Eq. (1). The experimentally measurable thermoelectric coefficients can be expressed via ℒ_0, ℒ_1, and ℒ_2 as G = e²ℒ_0, K = (ℒ_2 − ℒ_1²/ℒ_0)/T, and S = −ℒ_1/(eTℒ_0) for the electrical conductance, thermal conductance, and thermopower, respectively (T denotes the temperature of the system). We also investigate the violation of the Wiedemann-Franz law, quantified by the ratio K/(GTL_0), where L_0 = (π²/3)(k_B/e)² is the Lorenz number. For Eq. (4), we use the equation-of-motion (EOM) method [36], summarized as follows: since the Hamiltonian of Eqs. (1) and (2) is quadratic, the set of EOMs for the single-particle Green's functions can be closed without any truncation procedure [37]. We find four coupled linear algebraic equations, in which Σ = −iΓ is the self-energy of the coupling to the metallic leads; solving them gives the Green's function of the QD. A comparison of Eqs. (21) and (22) allows us to conclude that the peak values of the electric conductance are reached when S = 0, for which d𝒯/dε = 0; this happens when the condition of Eq. (25) is fulfilled. As we will see below, fulfillment of this condition corresponds to the presence of electron-hole symmetry in the system. Note that since ε_2 enters the denominator of Eq. (25), even slight differences between t and Δ are enough to change drastically the position of the resonance if the hybridization between the MFs is small. Results and Discussion In our calculations, we scale the energy in units of the Anderson broadening Γ = π Σ_k V² δ(ε − ε_k) [35] and take the temperature of the system to be k_B T = 10⁻⁴ Γ. The Anderson broadening Γ defines the coupling between the QD and the metallic leads, which is assumed to be symmetric for the sake of simplicity. We start our analysis from the case when only a single MF (η_A) is coupled to the QD. In terms of the amplitudes t, Δ this corresponds to t = Δ. To be specific, we fix t = Δ = 4Γ. Looking at Eq. (2), we see that the tunneling (t) and pairing (Δ) terms enter the Hamiltonian with equal weights, and thus we are at the superconducting (SC)-metallic boundary phase. Figure 2(a) shows the electrical conductance G = e²ℒ_0, scaled in units of the conductance quantum G_0 = e²/h, as a function of the QD energy level ε_1 for several coupling amplitudes ε_2 between the MFs. Note that if the MFs are completely isolated from each other (ε_2 = 0), the conductance reveals a plateau with G = G_0/2 whatever the value of ε_1 (black line), and a similar trend is observed in the thermal conductance shown in Fig. 2(b). The effect is due to the leaking of the Majorana fermion state into the QD [38]. The MF zero mode becomes pinned at the Fermi level of the metallic leads, but within the QD electronic structure. With increasing coupling between the wire and the QD, the MF state of the Kitaev wire leaks into the QD. As a result, a peak at the Fermi energy emerges in the QD density of states (DOS), while the corresponding peak in the DOS at the edge of the wire becomes gradually suppressed. Consequently, the QD effectively becomes the new edge of the Kitaev wire. This scenario was reported experimentally in ref. [9].
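The chain from transmittance to the measurable coefficients reconstructed above is easy to evaluate numerically. The sketch below computes the moments ℒ_n and then G, K, S, and ZT for a toy Lorentzian transmittance; the Lorentzian is only a stand-in for the Green's-function result of the paper, and units are chosen so that e = h = k_B = 1.

```python
import numpy as np

def moments(transmission, mu=0.0, T=0.1, n_max=3):
    """L_n = integral deps (-df/deps) (eps - mu)^n T(eps), with e = h = kB = 1."""
    eps = np.linspace(mu - 50 * T, mu + 50 * T, 20001)
    f = 1.0 / (np.exp((eps - mu) / T) + 1.0)
    mdf = f * (1.0 - f) / T                  # -df/deps for a Fermi function
    deps = eps[1] - eps[0]
    tr = transmission(eps)
    return [np.sum(mdf * (eps - mu) ** n * tr) * deps for n in range(n_max)]

def thermo_coefficients(transmission, mu=0.0, T=0.1):
    L0, L1, L2 = moments(transmission, mu, T)
    G = L0                        # electrical conductance (units of e^2/h)
    S = -L1 / (T * L0)            # thermopower
    K = (L2 - L1 ** 2 / L0) / T   # electronic thermal conductance
    ZT = G * S ** 2 * T / K       # dimensionless figure of merit
    return G, K, S, ZT

# Toy resonance of width 0.5 centered at eps = 0.3 (so that S != 0):
lorentzian = lambda eps: 1.0 / (1.0 + ((eps - 0.3) / 0.5) ** 2)
print(thermo_coefficients(lorentzian))
```

Shifting the resonance through the Fermi level changes the sign of S, which is the electron/hole tuner behavior discussed below.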
To get a resonant response of the thermoelectric conductances one should consider the case ε_2 ≠ 0, corresponding to the splitting of the MF zero-bias peak. The resonant behavior of G and K can be understood as arising from the presence of the auxiliary fermion d_2 in the Hamiltonian [Eq. (2)], whose energy ε_2 is now detuned from the Fermi level (see the inset of Fig. 2(b)). In this case, the regular fermion state, instead of the corresponding half-fermion provided by the MF η_A, gives the main contribution to the charge and heat currents. In this scenario, filtering of electricity and heat emerges: the maximal transmission occurs at ε_1 = 0. Our Fig. 2(a,b) recovers the findings of Fig. 5(a) in ref. [23]. Our work, however, has an important novel dimension: we demonstrate that even small deviations of the system from the SC-metallic boundary phase, which can be achieved by controlling the asymmetry of the couplings, allow the realization of efficient tuners of electricity and heat. This effect is shown in Fig. 3(a,b). As one can see, even a small detuning of the coefficient t from the value t = Δ leads to a substantial blueshift (for t > Δ) or redshift (for t < Δ) of the conductance resonances. Such sensitivity is a direct consequence of Eq. (25), which defines the position of the resonances. To shed more light on the tuning of charge and heat transport in the system, we plot the transmittance 𝒯(ε) appearing in Eqs. (3) and (4) as a function of ε_1 and ε; see Fig. 4(a-d). Figure 4(a) corresponds to the case t = Δ, ε_2 = 0. One can recognize a "cat eye"-shaped central structure, centered on the vertical line at ε = 0. Everywhere along this line 𝒯 = constant, which according to Eq. (21) means that changes in ε_1 do not affect the conductance. This corresponds well to the conductance plateau in Fig. 2. If ε_2 is finite, the "cat eye" structure transforms into a double-fork profile, as shown in Fig. 4(b). Note that in this case movement along the vertical line at ε = 0 leads to a change of the function 𝒯, which according to Eq. (21) leads to a modulation of the conductance. The maximal value is achieved at the point ε_1 = 0, which corresponds well to the resonant character of the curves shown in Fig. 2. The introduction of a finite value of ε_2 together with an asymmetry of the coupling between the QD and the MFs (t ≠ Δ) shifts the double-fork structure either upward along the ε_1 axis for t > Δ (panel (c), the blueshift of the resonant curves in Fig. 3) or downward along the ε_1 axis for t < Δ (panel (d), the redshift of the resonant curves in Fig. 3). It should be noted that similar results for the transmittance were reported both theoretically (ref. [30]) and experimentally (ref. [31]) for the geometry of a linear Kitaev wire with a QD attached to one of its ends, placed between source and drain metallic leads. Differently from the case considered in our work, those authors account for the spin degree of freedom, and in ref. [31] in particular they evaluate the dependence of the conductance on the energy level of the QD and the magnetic field, while we further analyze the dependence on ε and on the asymmetry of the couplings, relevant for understanding the tuner regime.
Despite the distinct geometry and the spinless regime, our results and those reported in refs 30,31 are in good correspondence with each other, thus validating the mechanism pointed out in refs 30,32 of field-assisted overlap between the MFs and tunnel couplings with the QD. The possibility to tune the electric and thermal conductances opens a way to tune the thermopower (S), the Wiedemann-Franz (WF) law and the dimensionless figure of merit (ZT), as shown in Fig. 5(a-c). Figure 5(a) demonstrates the dependence of the thermopower on ε₁. If t > Δ, then at ε₁ = 0 one has S > 0 and the setup behaves as a tuner of holes. On the contrary, for t < Δ, at ε₁ = 0 one has S < 0 and the setup behaves as a tuner of electrons. Figure 5(b,c) illustrates the violation of the WF law and the behavior of the dimensionless thermoelectric figure of merit ZT, respectively. Note that ZT does not reach pronounced amplitudes, i.e., ZT < 1 26, even for finite values of G and K, since the dependence on S² prevails once Eq. (21) is inserted into Eq. (10).

Conclusions

In summary, we have theoretically considered the thermoelectric conductances of a device consisting of an individual QD coupled both to a pair of MFs and to metallic leads. The charge and heat conductances of this system as functions of the electron energy in the QD reveal a resonant character. The position of the resonance can be tuned by changing the degree of asymmetry of the couplings between the QD and the MFs, which allows us to propose a scheme for a tuner of heat and charge. The thermopower, the Wiedemann-Franz law and the figure of merit are found to be sensitive to the asymmetry of the couplings as well. Our findings pave the way for the development of thermoelectric nanodevices based on MFs.
3,817.8
2016-11-14T00:00:00.000
[ "Physics" ]
Tailoring interference and nonlinear manipulation of femtosecond x-rays

We present ultrafast x-ray diffraction (UXRD) experiments on different photoexcited oxide superlattices. All data are successfully simulated by dynamical x-ray diffraction calculations based on a microscopic model that accounts for the linear response of phonons to the excitation laser pulse. Some Bragg reflections display a highly nonlinear strain dependence. The origin of the linear and of two distinct nonlinear response phenomena is discussed in a conceptually simpler model using the interference of envelope functions that describe the diffraction efficiency of the average constituent nanolayers. The combination of both models facilitates rapid and accurate simulations of UXRD experiments.

Figure 1. Experimental θ-2θ scans (gray bullets) of the SRO/STO SL. The broken lines show the calculated single-layer envelope functions (scaled for clarity), the black solid line is the DL envelope function (scaled by the number of DLs squared), and the red solid line is the resulting SL diffractogram of the LCDX at (a) t < 0 and (b) t = 1.6 ps after optical excitation with a fluence of 36.8 mJ cm⁻². The arrows mark the SL peaks considered in figure 3.

This simpler model captures the shape of these diffraction curves and provides a fundamental understanding of transient changes upon photoexcitation by femtosecond laser pulses. As the thickness of the individual layers in both SLs is much smaller than the extinction depth ξ of the x-rays, the corresponding diffractograms are essentially the Fourier transform of their electron densities. Figures 1 and 2 show the square modulus of the diffracted x-ray amplitude A_M(q) (A_I(q)) for a single metallic (insulating) layer of the respective sample as a red dashed (blue dotted) line. These curves match a sinc² function (the Fourier transform of a homogeneous slab), and we will refer to such curves as envelope functions. The width Δq of such an envelope function is inversely proportional to the real-space thickness d of the respective layer, and its center position q_env encodes the average strain of that single layer. The envelope of one DL, |A_DL|² = |A_M + A_I|² (black line in figures 1 and 2), accounts for the interference of the complex single-layer amplitudes 7. The DL envelope is scaled by the respective number of DLs squared. Clearly, it determines the intensity of the observed SL Bragg reflections, since the SL Bragg peaks touch the DL envelope in figures 1(a) and 2(a). In other words, the observed intensity I(q_SL, t) of a particular SL reflection at q_SL can be estimated from the relation I ∝ |A_DL|². The SL Bragg peaks thus 'sample' the DL envelope at discrete wavevectors that are selected by the Laue condition q_SL = n·2π/d_SL = n·g_SL, where g_SL is the reciprocal lattice vector corresponding to the SL period d_SL = d_M + d_I and n is an integer. The single-layer envelope functions themselves have significant intensity only in the q-range around the bulk Laue condition q⁽⁰⁾ = 2π/c_av = 2π(n_M + n_I)/(n_M c_M + n_I c_I) of the so-called zero-order SL peak (ZOP), corresponding to the average lattice constant c_av in one DL [16]. Here n_M and n_I correspond to the numbers of unit cells in the metallic and insulating layers, respectively. We can now use the above-introduced envelope model (EM) to predict the general features of transient changes of the diffractograms after laser-pulse excitation, such as presented in figure 1.
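The EM logic above translates directly into a short numerical sketch. The code below is a kinematic toy model with made-up layer parameters (n_M, n_I and the lattice constants are illustrative, not the samples' actual values): each layer amplitude is a geometric sum over its unit cells, giving the sinc-like envelope; the DL envelope is the coherent sum |A_M + A_I|²; and the SL peaks sample it at q_SL = n·g_SL.

```python
import numpy as np

n_M, n_I = 15, 22          # unit cells per metallic / insulating layer (toy)
c_M, c_I = 3.95, 3.905     # lattice constants in Angstrom (toy values)

def layer_amplitude(q, n, c, z0=0.0):
    """Kinematic amplitude of a homogeneous slab of n unit cells starting
    at depth z0: a geometric sum giving the sinc-like envelope."""
    return np.exp(1j * q * (z0 + c * np.arange(n))).sum()

q = np.linspace(1.45, 1.75, 4000)   # q-range around the ZOP (1/Angstrom)
A_DL = np.array([layer_amplitude(x, n_M, c_M)
                 + layer_amplitude(x, n_I, c_I, z0=n_M * c_M) for x in q])

d_SL = n_M * c_M + n_I * c_I        # SL period
g_SL = 2 * np.pi / d_SL
q0 = 2 * np.pi * (n_M + n_I) / d_SL  # zero-order SL peak position

# The SL Bragg peaks 'sample' the DL envelope at q_SL = q0 + n*g_SL:
for n in (-1, 0, 1, 2):
    q_SL = q0 + n * g_SL
    i = np.abs(q - q_SL).argmin()
    print(f"SL peak {n:+d}: q = {q_SL:.4f}, |A_DL|^2 = {abs(A_DL[i])**2:.1f}")
```

Because q⁽⁰⁾ = (n_M + n_I)·g_SL, the ZOP is itself one of the sampled peaks, consistent with the Laue condition stated in the text.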
The ultrafast deposition of the excitation energy in the metallic layers of the SL triggers their impulsive expansion [13], which shifts the red dashed envelope to smaller q values. The concomitant compression of STO shifts the blue dotted envelope to larger q values (compare the envelopes in panels (a) and (b) of figures 1 and 2). The magnitude of the envelope shifts is determined by the amplitude of this collective, spatially and temporally periodic lattice motion, also referred to as the SL phonon mode [14,17]. As a consequence, the DL envelope function, and thus the SL Bragg peak intensities, are altered. Eventually, the entire SL will expand within the time T_exp = D/v_SL, where D and v_SL are the total SL thickness and the sound velocity in the SL, respectively. For small time delays t ≪ T_exp, however, the SL period remains approximately constant and the SL Bragg peak positions q_SL do not change [18]. Here we exclusively focus on these short-time dynamics.

The UXRD experiments were performed at the FEMTO-slicing beamline of the Swiss Light Source (SLS), providing a time resolution of 140 ± 30 fs [19]. The samples were excited by ∼120 fs pump pulses at 800 nm wavelength, where the optical penetration depths ξ_SRO ≈ 52 nm and ξ_LSMO ≈ 90 nm generate an exponentially decaying stress pattern along the SL stack that is correctly accounted for in the LCDX [14,20,21]. As an example, the gray bullets in figures 1(a) and (b) show the measured θ-2θ scans of the SRO/STO SL before and 1.6 ps after excitation, respectively, encompassing four SL reflections (−1 to +2). We also recorded the intensity of selected SL Bragg peaks as a function of time delay for different pump fluences. The symbols in figures 3 and 4 illustrate the strong modulations of the relative intensity change [I(t) − I₀]/I₀, where I(t) is the measured x-ray intensity at time delay t and I₀ is the measured unpumped signal. Here it is directly verified that the maximum expansion of the metallic layers of both the SRO/STO and the LSMO/STO SL is reached after 1.6 ps.

In the following we discuss the simulation of the UXRD data. We highlight the linear and nonlinear response of distinct Bragg reflections of the two SLs, starting with the ZOP of the SRO/STO SL. The DL envelope of the excited SRO/STO SL in figure 1(b) matches the experimental SL peak intensities very well if we assume a homogeneous SRO expansion of 1.3% for a laser fluence of 36.8 mJ cm⁻². Only the +1 SL peak close to the substrate peak is overestimated by the EM⁹. If we use the LCDX, we are able to properly calculate the x-ray response. (⁹The overestimation of the +1 peak remains even if the complete SL including the substrate is simulated according to the EM (see [13]); this discrepancy between the EM and the exact LCDX is thus due to the inhomogeneous excitation density along the SL stack.) The photoinduced structure dynamics discussed above lead to a strong decrease of the ZOP intensity with increasing SRO strain, as can be seen in figure 3(a) [13]. According to the EM, this is because the ZOP is governed by the steep flanks of the mutually departing single-layer envelopes. The inset of figure 3(a) compares the ZOP intensity at 1.6 ps as measured (black bullets) and as predicted by the EM (black line) and the LCDX (red line). In addition, the contributions of the metallic (red dashed) and insulating (blue dotted) layers are indicated. The EM already yields very good qualitative agreement and illustrates the wide range of linearity up to ∼1% average SRO strain.
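The envelope shifts just described can be appended to the previous sketch. In the follow-up below (same illustrative layer parameters as before, and again only a toy model), the metallic layer is expanded while the insulating layer is compressed such that the SL period, and hence the ZOP position, stays fixed, mimicking the t ≪ T_exp regime; the printed ZOP intensity drops as the two single-layer envelopes depart.

```python
import numpy as np

n_M, n_I, c_M, c_I = 15, 22, 3.95, 3.905   # same toy values as before

def A_layer(q, n, c, z0=0.0):
    return np.exp(1j * q * (z0 + c * np.arange(n))).sum()

def dl_intensity(q, s_M):
    # Keep the SL period constant (t << T_exp): the insulator compression
    # balances the metal expansion, n_M*c_M*s_M + n_I*c_I*s_I = 0.
    s_I = -s_M * n_M * c_M / (n_I * c_I)
    cM, cI = c_M * (1 + s_M), c_I * (1 + s_I)
    return abs(A_layer(q, n_M, cM) + A_layer(q, n_I, cI, z0=n_M * cM)) ** 2

d_SL = n_M * c_M + n_I * c_I
q0 = 2 * np.pi * (n_M + n_I) / d_SL     # ZOP position (period unchanged)
for s in (0.0, 0.005, 0.013):           # metallic strain up to ~1.3%
    print(f"strain = {s:.3f}: |A_DL(q0)|^2 = {dl_intensity(q0, s):.1f}")
```

In this toy geometry the ZOP sits between the two single-layer envelope maxima, so expanding the metal (envelope to smaller q) and compressing the insulator (envelope to larger q) puts the ZOP on the steep flanks, the same mechanism the EM uses to explain the measured intensity decrease.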
Notably, the LCDX precisely matches the measured ZOP intensity at 1.6 ps (inset). Furthermore, it even accurately reproduces the recorded time scans in figure 3(a). For the highest pump fluence, we deduce an average SRO strain of 1.45% at 1.6 ps. In the case of the SRO/STO ZOP, the linear regime is intrinsically limited because at a certain strain level the ZOP intensity has to vanish, which is indeed the case at about 2% SRO strain. At this point, the first-order minima of both single-layer envelopes approach q⁽⁰⁾ (cf inset of figure 3(a)). In addition to this trivial deviation from linearity, other nonlinear x-ray responses could be identified. As seen in figure 1(a), the +2 SL peak is nearly forbidden in the stationary SL because it is enclosed by the first minima of the SRO and STO layer envelopes [13]. Panel (b) shows that this peak exhibits a strongly enhanced intensity at 1.6 ps due to the structural dynamics. The inset of figure 3(b) indicates the highly nonlinear dependence of this reflection on the SRO expansion as predicted by the EM (black line). A small strain initially suppresses the peak intensity as it completely shifts the minima of the single-layer envelopes to q⁽⁺²⁾. Only above a threshold strain of ∼0.5% does this peak attain considerable intensity, mainly due to the increase of the STO envelope function (blue dotted line). A comparison of the experimental transient intensity of the +2 SL peak with the LCDX calculations presented in figure 3(b) again reveals very good agreement. As the SL phonon amplitude builds up, the intensity first remains unchanged within the signal-to-noise ratio of the experiment up to 800 fs, then rapidly increases to its maximum at about 1.6 ps and subsequently drops back to zero, where it again remains for 800 fs. This behavior is repeated for the next periods with lower amplitude according to the energy loss of the SL phonon [14]. This 'gating' of x-ray Bragg reflectivity has an FWHM duration of 900 fs around the maximum at 1.6 ps. Although the EM covers all essential features of the +2 SL peak response (nonlinearity, threshold behavior), the inset of figure 3(b) indicates that the EM predictions quantitatively deviate from the precise LCDX simulations.

As a further test of our models, we present experimental and numerical results for the LSMO/STO SL, including similar linear and nonlinear effects. In addition, however, a transient destructive interference of the diffracted components of the individual layers is identified. The θ-2θ scan of the SL is shown in figure 2. Again, the ZOP of the LSMO/STO SL is located between the individual envelope functions, however this time with interchanged envelope positions of the metallic and insulating layers. According to the EM, this should lead to an increase of the ZOP intensity due to the approaching envelope maxima. This is confirmed by the UXRD measurements reported in figure 4(a), which shows the response of the ZOP. The corresponding inset reveals that the EM predicts a linear increase of the ZOP intensity at 1.6 ps up to ∼0.5% LSMO strain (black line); at ∼1% it reaches a maximum and then even starts to drop again. This non-monotonic dependence can again be understood by the two approaching envelope functions, which maximally overlap at an LSMO strain of ∼1%, where they provide the highest intensity for the ZOP. For higher strain, the ZOP intensity decreases as the envelope maxima separate again.
The experimental data at higher pump fluence in figure 4(a) are indeed indicative of this behavior, since we observe a clear plateau around 1.6 ps, meaning that the turning point has been reached. Once more, the LCDX satisfactorily simulates the data, although the effects are overestimated and thus have to be scaled down to coincide with the experimental data. The reason for this will be discussed below. The inset in figure 4(a) shows that the EM (black line) qualitatively approximates the LCDX (red line). In the case of the other SL peaks, figure 2(b) reveals that the EM yields a crude underestimation of the peak intensities for a homogeneous LSMO strain of 1.15%. We exemplify the underlying mechanism by investigating the +1 SL peak of the LSMO/STO SL at q⁽⁺¹⁾ = q⁽⁰⁾ + g_SL in more detail. Figure 2(b), as well as the inset of figure 4(b), demonstrates that even though both single-layer envelope functions predict a considerable intensity at 1.15% LSMO strain, the DL envelope vanishes. This is caused by the destructive interference of the x-ray waves diffracted from one LSMO and the adjacent STO layer. The experimental data in figure 4(b) indeed show that for high excitation fluence the signal minimum of the transient around 1.6 ps splits up, verifying the destructive interference and the implied non-monotonic dependence on strain. The LCDX (solid lines in figure 4(b)) predicts the relative intensity decrease to be 50% larger than what we measured, most likely because the XRD simulations assume a perfect crystal lattice without any kind of disorder or interdiffusion. The simpler EM even predicts a perfect destructive interference of the x-rays, which is much less pronounced in the LCDX calculations since the true strain pattern is taken into account. Thus it is not surprising that the LCDX still overestimates the effect of the interference. A similar reason holds for the ZOP.

In conclusion, we have presented predictions of combined model calculations simulating the transient strain field dynamics of photoexcited metal/insulator SLs and the induced transient XRD response. We compare these predictions to various UXRD data taken on SRO/STO and LSMO/STO SLs and find excellent agreement for both the linear and the nonlinear x-ray response to the induced strain. In particular, we have theoretically predicted and experimentally observed a peculiar destructive interference of x-ray waves in an LSMO/STO SL and a highly nonlinear response in an SRO/STO SL. The observations are interpreted by means of a simpler EM connecting the overall x-ray response to the structural dynamics of the individual layers. The EM correctly covers all transient features and often allows quantitative estimates. For precise simulations, the LCDX has to be evaluated. The presented findings emphasize that UXRD experiments can be accurately interpreted to reveal the transient structural dynamics of epitaxial crystals on subpicosecond time scales. They will open paths for the simulation-based design of future ultrafast x-ray devices exploiting such nonlinear or interference phenomena, which can be tailored into the nanostructures.
3,197.6
2012-01-01T00:00:00.000
[ "Physics" ]
Exceptional points enhance sensing in silicon micromechanical resonators

Exceptional points (EPs) have recently emerged as a new method for engineering the response of open physical systems, that is, systems that interact with the environment. Systems at EPs exhibit a strong response to a small perturbation. Here we show a method by which the sensitivity of silicon resonant sensors can be enhanced when operated at EPs. In our experiments we use a pair of mechanically coupled silicon micromechanical resonators constituting a parity-time (PT)-symmetric dimer. Small perturbations introduced on the mechanically coupled spring cause the frequency to split from the EP into the PT-symmetric regime without broadening the two spectral linewidths, and this frequency splitting scales with the square root of the perturbation strength. The overall signal-to-noise ratio is still greatly enhanced, although the measured noise spectral density of the EP sensing scheme shows a slight increase compared to the traditional counterpart. Our results pave the way for resonant sensors with ultrahigh sensitivity.

Introduction

The concept of microelectromechanical system (MEMS) resonators that mechanically vibrate at resonance has a long history of research dating back to the 1960s 1. Such resonators are often utilized in resonant sensors, which have seen significant development and commercial application in charge, mass, displacement, acceleration, and magnetic sensing 2. Parameters of interest to be sensed, i.e., small perturbations, induce an effective stiffness change or mass change of the resonator, leading to a shift of its resonant frequency or a variation of its vibration amplitude. Traditionally, a resonant sensor that outputs a frequency shift has a quasi-digital nature. As a result, it is basically independent of analog levels and minimizes the inaccuracies that arise in an analog output as well as in its converted digital format 3,4. However, the frequency shift is only linearly proportional to small perturbations, leading to low sensitivity. Enhanced sensitivity has been explored by biasing the resonant sensor in a nonlinear state or in a high-order frequency mode 5,6. Based on the mode-localization effect of weakly coupled resonators, resonant sensors that use an amplitude ratio as the output signal have been extensively developed [7][8][9]. In the often-used mode-localized resonators shown in Fig. 1a, where two identical resonators of proof mass m, mechanical spring constant k, and loss strength γ are weakly coupled through a mechanical spring constant k_c, a perturbation induces an effective stiffness change Δk or mass change Δm in one of the two resonators, resulting in amplitude variations. This class of sensors offers high sensitivity, but monitoring the voltage or current amplitude of the analog output at the same level of precision as tracking a frequency shift is challenging.

The development of a resonant sensor that offers high sensitivity while maintaining high precision is fundamentally needed. Here we propose a PT-symmetric scheme in which an equivalent amount of gain, controlled actively by a closed-loop feedback circuit, is incorporated into one resonator that serves as a PT-reversed counterpart to the other resonator with loss (Fig. 1b).
We theoretically propose and experimentally demonstrate that the frequency splitting of PT-symmetric resonators operated at EPs scales with the square root of the perturbation strength, in contrast to the linear frequency shift of the traditional scheme (Fig. 1c).

The PT-symmetry concept originated in the context of quantum mechanics 10,11 and has been extensively explored in classical wave systems such as optics and photonics [12][13][14][15], acoustics 16,17, mechanics 18,19, and electronics 20,21. PT-symmetric systems have two distinguished phases: an exact PT-symmetric phase with real eigenvalues and a broken PT-symmetric phase with complex-conjugate eigenvalues. EPs, where both the eigenvalues and the eigenvectors coalesce, separate the exact phase from the broken phase. Systems at EPs exhibit strong responses to small perturbations. Therefore, EP-based sensors have recently received significant attention [22][23][24][25], although there is an ongoing debate about their fundamental limits [26][27][28]. In the case of PT-symmetric inductor-capacitor (LC) resonators, classical noise is more relevant than quantum noise. The enhanced sensitivity of PT-symmetric LC sensors has been experimentally demonstrated by biasing them in the exact phase 29,30, at EPs 31,32, and in the broken phase 33. Moreover, through optimized design of low-pass circuits, thermal noise has been alleviated to a level identical to that achieved by the corresponding traditional sensing scheme 34. Inspired by these works, in this paper we explore the consequences of PT-symmetric silicon micromechanical resonators for EP-enhanced sensing.

Principle of EP-enhanced sensitivity

To describe how EPs enhance sensing in silicon micromechanical resonators, we present an analysis based on a PT-symmetric dimer 11 consisting of two identical resonators of mass m, spring constant k, and resonance frequency ω₀ = √(k/m), as shown in Fig. 1b. In the PT-symmetric dimer, one resonator has a loss γ = c/√(mk) and the other has a gain g = c_g/√(mk), with c and c_g representing the damping coefficients of the loss and gain resonators, respectively. The resonators are coupled together with the coupling strength μ = k_c/k, where k_c is the coupling spring constant. The system is described by Eq. (1), where ω is the frequency scaled by ω₀, the subscript 1 (or 2) refers to the gain (or loss) resonator, and x_{1,2} are the eigenstates describing the displacements. In the case of weak coupling, Eq. (1) may be cast into the coupled-mode model of Eq. (2) (see Methods). To find the eigenfrequencies, taking x_{1,2} ∝ e^{iωt}, we obtain the characteristic equation, Eq. (3). Given a delicate balance between gain and loss, g = γ, the eigenfrequencies and the corresponding eigenstates are given by Eq. (4), where φ is the phase difference between resonator 1 and resonator 2.
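A minimal coupled-mode sketch of the dimer spectrum follows. The explicit 2×2 matrix below is our own assumption, chosen only to reproduce the eigenfrequency behavior quoted in the text: real eigenfrequencies in the exact phase, complex-conjugate parts in the broken phase, and coalescence at ω_EP = 1 + μ/2 when μ = γ (frequencies in units of ω₀).

```python
import numpy as np

def eigenfrequencies(mu, gamma, g):
    """Toy 2x2 coupled-mode matrix of the PT dimer (our assumption)."""
    h = np.array([[1 + mu / 2 + 0.5j * g, -mu / 2],
                  [-mu / 2, 1 + mu / 2 - 0.5j * gamma]])
    return np.sort_complex(np.linalg.eigvals(h))

gamma = 0.01                        # loss, balanced by the gain g = gamma
for mu in (0.02, 0.01, 0.005):      # exact phase, EP, broken phase
    w1, w2 = eigenfrequencies(mu, gamma, gamma)
    print(f"mu/gamma = {mu / gamma:3.1f}:  omega = {w1:.6f}, {w2:.6f}")
```

Running this prints a real splitting of √(μ² − γ²) ≈ 0.017 for μ = 2γ, two coalesced values 1.005 at μ = γ, and a complex-conjugate pair for μ = γ/2, the three regimes described in the next paragraph.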
Note that the eigenfrequencies depend on the coupling strength μ relative to the gain/loss parameter γ. In the exact phase, μ > γ, the coupling between the gain and loss resonators is sufficiently strong. The eigenfrequencies are real, which is characterized by equal magnitudes of the superposition oscillations on the gain and loss sides. In the broken phase, μ < γ, the coupling is too weak for the system to remain in equilibrium, and the eigenfrequencies become complex, with a single real frequency and conjugate imaginary parts, indicating that one mode grows exponentially while the other decays exponentially. When μ = γ = μ_EP, the eigenfrequencies merge into ω_EP = 1 + μ/2, i.e., EPs at which the eigenfrequencies and the corresponding eigenstates coalesce. Figure 2a shows the evolution of the real and imaginary parts of the eigenfrequencies with the coupling strength μ. When the coupling spring is subjected to an external perturbation, the coupling spring constant k_c is altered to k_c + Δk_c, corresponding to the coupling strength (1 + δ)μ, where δ = Δk_c/k_c. Solving Eqs. (2)-(3) under the perturbation yields the frequency splitting near the EPs, Δω_EP = μ√(2δ), and its sensitivity to the perturbation, ∂(Δω_EP)/∂δ = μ/√(2δ). Physically, an external perturbation pushes the system away from the EP and consequently lifts the non-Hermitian degeneracy of the eigenfrequencies and the corresponding eigenstates, triggering frequency splitting 10,11. In our scheme, the perturbation δ is positive because the electrostatic force is always attractive. Therefore, the eigenfrequency and its splitting are real during operation. The perturbation in previous EP sensing schemes causes the systems to break PT symmetry, giving rise to complex frequencies [22][23][24][25]. The presence of the imaginary part of the eigenfrequencies leads to broadening and further overlapping of the two adjacent spectra and sets a fundamental resolution limit on the sensitivity 27. In fact, the perturbation in our scheme drives the PT-symmetric resonators from the EP into the PT-symmetric regime. This indicates that silicon micromechanical resonators operated at EPs can be exploited for enhanced sensing using the frequency splitting as a measure, as shown in Fig. 2c. In contrast, traditional resonators utilize a diabolic point (DP) at which the eigenfrequencies, but not the eigenstates, coalesce, as described in the Methods section. The traditional resonators become trivially degenerate when uncoupled from each other, μ = 0. When coupled and subject to the same perturbation δ, the resulting frequency splitting is proportional to the perturbation strength, Δω_DP = μδ, as shown in Fig. 2b. Hence, for sufficiently small perturbations, the splitting at the EP is larger than that at the DP. We use finite-element simulations to validate the above results, as provided in the Supplementary Material. Figure 1c, d (dots) show the frequency splitting and its sensitivity as a function of the perturbation for the EP and DP resonators, respectively. Figure 2a (dots) shows the real and imaginary parts of the eigenfrequencies as a function of the coupling strength. These results confirm the coupled-mode model.
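Plugging numbers into the two closed forms quoted above makes the square-root advantage concrete. Here Δω_EP is evaluated from the un-expanded exact-phase splitting (which reduces to μ√(2δ) for small δ), Δω_DP = μδ, and the parameter values g = γ = μ = 0.01 follow the values used in the paper's figures.

```python
import numpy as np

mu = gamma = 0.01
for delta in (1e-4, 1e-3, 1e-2, 4e-2):
    # EP: sqrt((mu*(1+delta))^2 - gamma^2) ~ mu*sqrt(2*delta) for small delta
    dw_ep = np.sqrt((mu * (1 + delta)) ** 2 - gamma ** 2)
    dw_dp = mu * delta                 # DP: linear in the perturbation
    print(f"delta = {delta:7.4f}:  EP {dw_ep:.3e}  DP {dw_dp:.3e}  "
          f"ratio ~ {dw_ep / dw_dp:5.0f}")
```

At δ = 4% this ideal ratio is about 7, while the paper reports roughly a five-fold enhancement measured at that perturbation; at smaller δ the advantage grows as 1/√δ.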
Experiments of EP-enhanced sensitivity

We demonstrate the theory presented above using a pair of mechanically coupled silicon micromechanical resonators. A scanning electron micrograph (SEM) image of the structure is shown in Fig. 3. Both resonators consist of double-ended tuning forks (DETFs). Previously, a pair of electrically coupled, nearly identical DETFs was used to demonstrate the mode-localization effect 7,8. Our work differs in that we aim to demonstrate a scheme of EP-enhanced sensitivity. The gain resonator is regulated actively by external proportional feedback control. During external feedback, the resonator motion is transduced into a capacitance variation, leading to a current variation that is then filtered, phase shifted, and finally applied to drive the resonator 35. Here, the DETF on the left is configured as a gain resonator, with the feedback circuit connected to its sense electrode. The DETF on the right is designed as a loss resonator, and the readout circuit is connected to its sense electrode (Supplementary Fig. 5a).

The gain is finely tuned by adjusting the amplitude of the feedback force that is in phase with the mechanical velocity, so that a delicate balance between gain and loss can be achieved. As shown in Fig. 3, the two DETFs are weakly coupled by a serpentine flexure beam connected to their ends. The equivalent spring stiffness of the flexure beam can be electrostatically adjusted 36. The counter electrode and the flexure beam are designed directly opposite each other with a gap of 3 μm. To demonstrate the physical phenomenon of EP-enhanced sensing in resonators, the voltage applied across them can precisely adjust the equivalent spring stiffness of the flexure beam to generate small perturbations (Supplementary Fig. 3c).

Following the simulations, we fabricated a pair of mechanically coupled, nearly identical DETF resonators, as shown in Fig. 3. The gain resonator was driven and sensed using parallel-plate capacitive transduction, while the loss resonator with parallel-plate capacitive transduction was constructed only for measurement. The fabricated resonators were tested under vacuum (≈1.65 Torr) in a custom vacuum chamber. The frequency response was recorded using a lock-in amplifier connected to the loss resonator. Both the alternating current (AC) driving signal and the feedback control signal were simultaneously applied to the gain resonator. A quality factor of approximately 350 was measured, and the corresponding loss γ and damping coefficient c were estimated as 0.00285 and 4.77 × 10⁻⁶ N·s/m, respectively. Due to manufacturing process tolerances, there is a deviation between the initial frequencies of the two resonators. By adjusting the feedback amplitude, the coupled resonators were brought closer to the EP, at which the resonance frequency was measured to be approximately 302.36 kHz. The perturbation was then applied by regulating the direct current (DC) voltage across the flexure beam and its counter electrode.

Figure 4a shows the frequency response of the PT-symmetric resonator biased initially near the EP as a function of the perturbation. Figure 4b shows the dependence of the extracted frequency splitting on the perturbation strength. For comparison, the frequency response of the resonator operated at the DP was also collected by removing the external feedback. Overall, the frequency splitting of the EP resonator is larger than that of the DP resonator subject to the same small perturbations, as expected. For δ = 4%, as shown in Fig. 4b,
an enhancement of approximately 5 times is experimentally observed. This shows that the experimental results align well with the theoretical expectations and simulations. Moreover, the sensitivity can be enhanced by an order of magnitude compared to that of the DP resonator for sufficiently small perturbations. The inset in Fig. 4b displays a logarithmic plot of the dependence of Δω_EP and Δω_DP on δ. The DP resonator exhibits a slope of 1, whereas the EP resonator exhibits a slope of 1/2, confirming the square-root topology of EPs.

Although our PT-symmetric resonators with loss and gain elements have high sensitivity to small perturbations when biased at EPs, the loss and gain elements are prone to adding noise to the system. This issue has raised an ongoing debate concerning the effectiveness of EP sensing schemes [26][27][28]. There is technical noise and fundamental noise in PT-symmetric systems. Technical noise refers to thermal noise and electronic noise. Fundamental noise refers to the excess noise caused by the eigenbasis collapse in non-Hermitian systems. In a classical system, technical noise is more common. The total root-mean-square (RMS) noise voltage, v_PT, can be expressed as a sum of terms associated with the different noise sources that might affect the precision of the measurements, where v_t, v_f, and v_DC are the thermal RMS noise voltage of the mechanical resonators, the electronic RMS noise voltage of the gain resonator due to the external feedback circuits, and the electronic RMS noise voltage due to the bias voltage sources, respectively. Typically, v_DC is dominated by the other terms in the equation. To characterize the noise level of the micromechanical resonators, the noise power spectral density (PSD) was measured for the traditional and PT-symmetric schemes. During the measurements, the driving signal was turned off, and a noise analyzer (Zurich Instruments HF2LI) was used to measure the noise PSD around the EPs at the readout channel. As shown in Fig. 5, the average values of the noise voltage spectral density for the traditional and PT-symmetric schemes are 0.69 × 10 … ; the measured noise spectral density of the EP sensing scheme is slightly greater than that of the traditional scheme. This shows that the noise voltage of the gain resonator due to the external feedback circuits is dominant. Noise limits the minimum signal that the sensors can detect; however, v_t and v_f do not experience strong variations around the EPs, while the sensitivity is enhanced. As a result, the overall signal-to-noise ratio is still greatly enhanced, which is desirable for various sensors 2.

Higher sensitivity can potentially be achieved by reducing the noise of the external proportional feedback control circuits, which is currently dominated by circuit parasitics. Detuning of the coupled DETF resonators due to process tolerances induces a baseline bifurcation that limits the smallest Δω that can be detected. This corresponds to the zero output of a general sensor.
Resonators for resonant sensors are usually made as lossless as possible to exhibit high quality factors 9, or the effective quality factor of the resonators is further improved by external proportional feedback control 35. For our demonstration we have utilized the same configurations, including the same closed-loop feedback design as in mode-localized resonators 8,9. Hence, in principle, the EP resonator presented here does not add any noise relative to the mode-localized resonators. However, shifts in amplitude may not be measured as accurately as shifts in frequency. EPs also exist in coupled resonators with unbalanced gain and loss 14,15. Such unbalanced systems could be exploited to enhance the sensitivity of high-loss resonators. In previous EP-based sensors, where the perturbation is exerted on one of the coupled resonators, PT symmetry breaks during operation, leading to complex frequency splitting 25. The perturbation in our scheme is exerted on the coupling spring, which is symmetric with respect to the two coupled resonators, leading to real frequency splitting. However, the proportionality coefficient for the symmetric perturbation is not as large as that for the asymmetric perturbation 32. The application of EP-based silicon micromechanical resonators, as well as their noise properties, remains an important direction for future work.

Conclusions

In summary, we present both theoretical and experimental studies of a PT-symmetric micromechanical resonator. We show that the frequency splitting induced by a perturbation at an exceptional point has a square-root dependence on the perturbation strength, in contrast to the linear dependence in traditional resonators, leading to enhanced sensitivity for small perturbations. Simulations and measurements from a pair of mechanically coupled micromechanical resonators support the theoretical predictions. By replacing the perturbation with acceleration or magnetic signals, our scheme may find applications in accelerometers and magnetometers.

Coupled-mode equations for micromechanical resonators

Applying Newton's law to the coupled micromechanical resonators in Fig. 1b yields the equations of motion, Eq. (6), where m, k, and c are the mass, spring constant, and damping coefficient of a single resonator, respectively, and k_c is the coupling spring constant; x_n (n = 1, 2) denotes the vibration displacements of the two resonators. Taking x_n(t) → x̃_n e^{iωt}, we rewrite Eq. (6) as Eq. (7), where μ = k_c/k is the coupling strength, γ = c/√(mk) is the loss strength, g = c_g/√(mk) is the gain strength, and ω is scaled by ω₀ = √(k/m). If the coupling is weak, i.e., μ ≪ 1, μ and γ can be taken to be of the same order, and we make the following approximations: ω² ≈ 2ω − 1, ωγ ≈ γ, and ωg ≈ g. Equation (7) then reduces to Eq. (8), which is equivalent to the coupled-mode equations in Eq. (2) with the time-harmonic displacement x_n(t) → x̃_n e^{iωt}. Solving Eq. (8) yields Eq. (9). Under a delicate balance between gain and loss, g = γ, i.e., for the PT-symmetric dimer, the eigenfrequencies and the corresponding eigenstates are those given in Eq. (4).
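Equation (6) itself is not reproduced in this extract. A plausible explicit form, assuming the sign convention stated below (positive damping is loss, negative damping is gain) and coupling through the relative displacement, is:

```latex
m\ddot{x}_1 - c_g\,\dot{x}_1 + k\,x_1 + k_c\,(x_1 - x_2) = 0, \qquad
m\ddot{x}_2 + c\,\dot{x}_2 + k\,x_2 + k_c\,(x_2 - x_1) = 0 .
```

Substituting x_n(t) = x̃_n e^{iωt} and scaling frequencies by ω₀ reproduces the dimensionless parameters μ, γ, and g defined above.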
For the PT-symmetric dimer, the frequency splitting near the EPs is obtained as follows. When the coupling spring is subjected to an external perturbation, the coupling spring constant k_c is altered to k_c + Δk_c, corresponding to the coupling strength (1 + δ)μ, where δ = Δk_c/k_c. The frequency splitting at the EPs (μ = γ) due to the perturbation is given by Δω_EP = μ√(2δ), and the sensitivity of the frequency splitting to the perturbation by ∂(Δω_EP)/∂δ = μ/√(2δ).

Taking g = γ = 0 in Eq. (8), we obtain the Hermitian Hamiltonian, and the characteristic equation is solved in the same manner. For the Hermitian system, the DPs appear when μ = 0; hence, the traditional resonators become trivially degenerate when uncoupled from each other. Under coupling and subject to the perturbation δ, the resulting frequency splitting at the DPs is given by Δω_DP = μδ.

Fabrication of silicon micromechanical resonators

Silicon micromechanical resonators were fabricated using n-type (100) silicon-on-insulator (SOI) wafers. The process flow is presented in Supplementary Fig. 4. Each of the tines in the tuning-fork resonators was designed to be 20 μm thick, 300 μm long, and 8 μm wide, with a gap of 6 μm between the tines. The drive and coupling gaps were designed to be 3 μm wide.

Gain resonators

A gain resonator was achieved by applying a feedback force proportional to its velocity ẋ. Under the feedback force, the dynamic equation acquires an additional term proportional to ẋ, so the effective damping coefficient of the resonator is modified by the feedback. Therefore, the damping coefficient can be adjusted by regulating the feedback force (Supplementary). In the vibration equation, a positive damping coefficient represents a loss, and a negative damping coefficient represents a gain.

A perturbation approach

Weak coupling between the two silicon micromechanical resonators was achieved using a flexure beam. The flexure beam was designed with a long beam 50 μm long and 5 μm wide and a short beam 25 μm long and 5 μm wide. The two DETFs were weakly coupled by the flexure beam connected to their ends. The initial coupling coefficient k_c/k was 0.00285. The counter electrode was designed with a gap of 3 μm relative to the flexure beam. The equivalent spring stiffness k_eff of the flexure beam can be electrostatically adjusted; k_eff = k_c + k_e, where k_c is the mechanical spring stiffness and k_e = Δk_c is the electrical spring stiffness, giving the perturbation δ = Δk_c/k_c. By changing the voltage across the counter electrode and the flexure beam, the perturbation can be adjusted (Supplementary Fig. 3c).
Measurement setup

The micromechanical resonators were placed in a customized vacuum chamber. The gain resonator was controlled and sensed using parallel-plate capacitive transduction. In the feedback circuit of the gain resonator, a transimpedance amplifier (TIA) (OPA656) was used to convert the motion signal of the resonator into an electrical signal. A bandpass filter (BPF) was used to prevent the possible occurrence of unwanted oscillator modes. A voltage-controlled amplifier (VCA810) was used for voltage amplitude control, and the subsequent electrical signal flowed to the phase modulation, enabling the phase to be consistent with the velocity of the resonator. The final output electrical signal was used as the driving signal of the resonator, together with an AC source of 20 mVpp. The gain resonator was biased by a DC voltage of 25 V. The feedback circuit was powered by +5/−5 V DC voltage. The loss resonator with parallel-plate capacitive transduction was used only for measurement. The frequency response was recorded using a lock-in amplifier (HF2LI, Zurich Instruments) connected to the loss resonator. The gain resonator and the loss resonator were both connected to GND. A photograph of the experimental setup is provided as Supplementary Fig. 5b.

Fig. 1 Principle of the frequency monitoring of two coupled micromechanical resonators. The two resonators with identical mass m and identical stiffness k are coupled with the coupling strength μ = k_c/k, where k_c is the coupling spring constant. a Traditional scheme: two coupled resonators with the same loss γ = c/√(mk), where c is the damping coefficient. b PT-symmetric dimer: one resonator with loss γ and the other resonator with an equivalent amount of gain g. c Comparison of the frequency splitting Δω of the two coupled resonators operated at the diabolic point (DP) and the exceptional point (EP) when the coupling spring is subject to an external perturbation δ = Δk_c/k_c. The response is Δω ∝ δ^{1/2} for the EP resonators, whereas Δω ∝ δ for the DP resonators. d Comparison of the sensitivity of the frequency splitting to the perturbation. For small perturbations, the sensitivity of the EP resonators is enhanced by an order of magnitude with respect to that of the DP resonators. In the computation, the gain/loss coefficient g = γ = 0.01 and the initial coupling coefficient μ = 0.01 are set for the EP resonators. The lines and dots in (c) and (d) indicate the theoretical and simulated results, respectively.

Fig. 2 Sensitivity enhancement of silicon micromechanical resonators biased at the EP. a The real and imaginary parts of the eigenfrequencies for the EP resonators are shown as a function of the normalized coupling coefficient μ/μ_EP with g = γ = 0.01. The lines and dots denote the theoretical and simulated results, respectively. b The real frequency evolution of the DP resonators when varying the coupling coefficient μ and the loss γ. The operation of the DP resonators is usually required to be lossless, and hence their frequency is independent of the loss. c The real frequency evolution of the EP resonators when varying the coupling coefficient μ and the gain g with γ = 0.01. Even if g ≠ γ, the real parts of the complex frequency also respond to the coupling. The magnitude of the response of the resonators is defined by the frequency splitting Δω.

Fig. 4 Measurements of PT-symmetric resonators operated near EPs.
a The frequency response of the EP resonator as a function of the perturbation, δ = 0.9%, 2%, and 3%, respectively. b The real parts of the frequency splitting as a function of the perturbation δ around the EP (red). The dotted lines indicate the fitted square-root behavior, the filled diamonds indicate experimental data, and the error bars indicate the uncertainty in the frequency measurements due to the external circuits. For comparison, the results of the DP resonators are shown in brown. The inset displays a logarithmic plot of the dependence of the frequency splitting on δ. The EP resonator exhibits a slope of 1/2, whereas the DP resonator exhibits a slope of 1.

Fig. 5 Noise spectral density extracted from the readout channel. Note that the peak in the figure corresponds to the 50 Hz power source frequency.
5,720.4
2024-01-19T00:00:00.000
[ "Physics", "Engineering" ]
Geometry and flow influences on jet mixing in a cylindrical duct

To examine the mixing characteristics of jets in an axisymmetric can geometry, temperature measurements were obtained downstream of a row of cold jets injected into a heated cross stream. Parametric, nonreacting experiments were conducted to determine the influence of geometry and flow variations on mixing patterns in a cylindrical configuration. Results show that jet-to-mainstream momentum-flux ratio and orifice geometry significantly impact the mixing characteristics of jets in a can geometry. For a fixed number of orifices, the coupling between momentum-flux ratio and injector geometry determines 1) the degree of jet penetration at the injection plane and 2) the extent of circumferential mixing downstream of the injection plane. The results also show that, at a fixed momentum-flux ratio, jet penetration decreases with 1) an increase in slanted slot aspect ratio and 2) an increase in the angle of the slots with respect to the mainstream direction.

Introduction

Mixing of jets in a confined crossflow has a variety of practical applications and has motivated a number of studies over the past decades. In a gas turbine combustor, e.g., mixing of relatively cold air jets is important in the dilution zone, where the products of combustion are mixed with air to reduce the temperatures to levels acceptable for the turbine blade materials. Mixing of jets in a crossflow is also important in applications such as the discharge of effluents in water and the transition from hover to cruise of V/STOL aircraft.

To meet the air quality standards affecting gas turbines, low-emissions combustors are being developed.¹ One of the promising low-NOx combustor concepts is the rich-burn/quick-mix/lean-burn (RQL) combustor.² The RQL developmental effort poses new challenges in jet mixing in a confined crossflow. More specifically, the range of jet-to-mainstream mass flow ratios encountered in the quick-mix region of an RQL combustor differs significantly from that of a conventional combustor dilution zone. Most of the previous research on jets in a crossflow has been performed in rectangular geometries. Examples of these studies are provided in Table 1 and are summarized elsewhere.⁶ The influence of orifice geometry and spacing, jet-to-mainstream momentum-flux ratio J, and density ratio has been documented for single- and double-sided injection (e.g., Ref. 6). These studies have identified J and orifice spacing as the most significant parameters influencing the mixing pattern.

Experiment

A series of parametric experiments was conducted in this study to determine the influence of J and orifice configuration on the mixing of jets in a can geometry. The parametric experiments investigated a range of J values, including 25, 52, and 80. A jet-to-mainstream mass ratio of 2.2 was maintained at each tested J value. An area discharge coefficient of 0.80 was assumed in designing the orifices.
The modules were 6.5 in. (165 mm) long, with the center of the orifice row placed at one radius from the edge. The orifice area for each module at the design J value was kept constant. As a result, the dimensions of a given orifice varied as a function of J. A representative module is shown in Fig. 1. For reference, the axial location of the trailing edge and the blockage are presented in Table 2. The former is expressed as the ratio of the axial projection of the orifice to the radius of the mixing module, and the latter is defined as the ratio of the circumferential projection of the orifice to the spacing between orifice centers. Mixing was examined by measuring the local mean temperature throughout the module. The mainstream flow entering the module was heated to the highest temperature (212°F) compatible with the upper temperature limits of Plexiglas. Jets were introduced at room temperature. The operating conditions are presented in Table 3. The reference velocity, defined as the velocity at the inlet to the mixing section and calculated based on the mainstream temperature and pressure, was 34.5 fps (10.5 m/s). The actual discharge coefficient and momentum-flux ratio for each case were determined by measuring the jet pressure drop.

A 12-in.-long, 0.125-in. type K thermocouple was used to measure the temperatures. Temperature was measured at 50 points in a quarter sector of the modules, for five planes downstream of the orifices. Figures 2a and 2b show the measurement points and the axial planes. A 90-deg sector was used.

Experimental Facility

The test facility, located at the UCI Combustion Laboratory and shown schematically in Fig. 3, was supplied with air that was filtered and regulated before branching into two isolated main and jet circuits. The jet circuit incorporated four independently metered flow legs. The main circuit consisted of a coarse and a fine leg that provided a total of 150 standard cubic feet per minute (SCFM) for the mainstream flow. Each leg was regulated independently to eliminate the effects of pressure fluctuations. All circuits were metered by sonic venturis. The mainstream air was heated to 212°F by a 20-kW air preheater (Watlow, P/N 86036-2). The outlet temperature was controlled by a Watlow heater controller (series 800). The mainstream air, after being metered and heated, passed through flexible tubing into a 2-in. insulated carbon steel pipe immediately upstream of the mixing module. A combination honeycomb/screen in the pipe provided uniform flow to the mixing module. The flexible tubing upstream of the pipe allowed manual traversing of the experiment in the X, Y, and Z directions. A Mitutoyo model PM-331 digital traverse readout was used to read the coordinates.

The 3-in. mixing module used in the parametric phase was positioned inside a concentric Pyrex® manifold (see Fig. 3). The jet manifold incorporated four openings on top and four on the bottom, each 90 deg apart. Four discrete jets were supplied to the manifold through the bottom openings. Two of the openings on the top were used to measure the manifold temperature and pressure, and the other two were blocked. Each jet circuit was metered individually and installed to provide symmetric flow conditions at the inlet to the manifold. Honeycomb was installed in the jet plenum upstream of the orifices to provide uniform flow through the mixing module.
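The coupling between J, mass ratio, and orifice area noted above follows from one-dimensional relations. The sketch below is our own back-of-envelope calculation, not the authors' design procedure: for a fixed jet-to-mainstream mass-flow ratio MR and density ratio DR = ρ_jet/ρ_main, the definition J = (ρv²)_jet/(ρv²)_main fixes the required jet-to-mainstream area ratio, A_j/A_m = MR/√(J·DR). Here MR = 2.2 and Cd = 0.80 are quoted in the text, while DR = 1.26 is taken from a table fragment in the source.

```python
import math

MR, DR, Cd = 2.2, 1.26, 0.80   # mass ratio, density ratio, discharge coeff.

for J in (25, 52, 80):
    # v_j/v_m = sqrt(J/DR) and MR = DR*(v_j/v_m)*(A_j/A_m)
    # => effective area ratio A_j/A_m = MR / sqrt(J*DR)
    area_eff = MR / math.sqrt(J * DR)
    area_geom = area_eff / Cd   # geometric area via the discharge coefficient
    print(f"J = {J:2d}: A_jet/A_main effective = {area_eff:.3f}, "
          f"geometric = {area_geom:.3f}")
```

The printed ratios shrink as J grows, consistent with the statement that the orifice dimensions varied as a function of J at fixed mass ratio.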
Analysis

To compare the mixing characteristics of different modules, the temperature measurements were normalized by defining the mixture fraction f at each point in the plane:

f = (T − T_jet)/(T_main − T_jet)   (1)

A value of f = 1.0 corresponds to the mainstream temperature, whereas f = 0 indicates the presence of pure jet flow. Complete mixing occurs when f approaches the equilibrium value, which is nearly equal to the ratio of the upstream flow to the total flow. Note that f = 1 − θ, where θ appears in previous studies.⁶

To quantify the mixing effectiveness of each module configuration, an area-weighted standard deviation parameter ('mixture uniformity') was defined at each z/R plane:

U = sqrt[(1/A) Σᵢ aᵢ (fᵢ − f_equil)²]   (2)

where A = Σᵢ aᵢ, fᵢ is the mixture fraction calculated for each node, and f_equil is the equilibrium mixture fraction, defined as the mainstream-to-total flow ratio (3). Complete mixing is achieved when the mixture uniformity parameter across a given plane reaches zero.

Results and Discussion

This section presents the mixing characteristics for the baseline geometry (module 1) and for the 8:1 and 4:1 slanted-slot configurations (modules 2 and 5) as a function of momentum-flux ratio. In addition, the effects of slot aspect ratio and orientation on the mixing pattern are discussed. From an overall-mixing standpoint, an optimum mixer is defined as one that produces a uniformly mixed flowfield, without a persistent unmixed core or unmixed circumferential regions, by the z/R = 1.0 plane. In the contour plots presented, the centers of the jets are located at 22.5 and 67.5 deg relative to the measurement plane. For slanted slots, the jets angle counterclockwise as one moves upstream.

Fig. 5 Mixture fraction, J80MOD1, baseline eight-hole, J = 84.2.

At the jet injection locations for J = 25 (J25MOD1), f decreases monotonically in the radial direction, with the highest concentration of mainstream fluid on the duct centerline (R = 0.0) and the lowest at the walls (R = 1.5). The monotonic variation of f indicates that no backflow exists for this configuration. The radial variation of f at z/R = 0.0 for J = 80 (J80MOD1), on the other hand, is nonmonotonic. For the J = 80 module at the injection location, f is relatively low at R = 0.0, increases as R is increased, and approaches zero at the jet inlet. This nonmonotonic variation of f indicates backflow and overpenetration of the jets for these configurations. Overpenetration of the jets is evident at the downstream axial locations for J = 80 (J80MOD1) from the high f near the wall. At z/R = 1.0, the J = 80 module (J80MOD1) shows low f values at the center and an unmixed region along the wall, whereas J = 25 (J25MOD1) shows a more uniformly mixed flowfield. The degradation in mixing for J = 80 (J80MOD1) occurs because the increased jet penetration to the module center directs a larger portion of the jet flow to the core, thus decreasing the circumferential mixing along the walls. In an axisymmetric can geometry, where the majority of the mass is concentrated along the walls, good circumferential mixing is important in obtaining a well-mixed flowfield. Therefore, according to the definition presented earlier, the round holes at J = 25 (J25MOD1) display closer-to-optimum mixing than the J = 80 case at z/R = 1.0. Following the methodology of Eq. (6) found in Holdeman,⁶ the optimum momentum-flux ratio for this eight-orifice case would be just over 20.
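The two metrics defined in the Analysis section above are easy to compute. In the sketch below, the formulas follow the verbal definitions in the text (the display equations themselves are not reproduced in this extract), and the node temperatures, node areas, and the assumed 72°F jet temperature are made up for illustration.

```python
import numpy as np

T_main, T_jet = 212.0, 72.0   # deg F; mainstream heated, jets at room temp.

def mixture_fraction(T):
    """f = 1 at pure mainstream temperature, f = 0 for pure jet flow."""
    return (T - T_jet) / (T_main - T_jet)

def mixture_uniformity(f, a, f_equil):
    """Area-weighted standard deviation of f about the equilibrium value;
    zero means complete mixing across the plane."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(np.sum(a * (np.asarray(f) - f_equil) ** 2) / a.sum())

# Equilibrium value ~ mainstream flow / total flow; with the paper's
# jet-to-mainstream mass ratio of 2.2 this is 1/(1 + 2.2) ~ 0.31.
f_equil = 1.0 / (1.0 + 2.2)

T_nodes = np.array([95.0, 120.0, 150.0, 180.0, 200.0])   # hypothetical data
a_nodes = np.array([1.0, 1.5, 2.0, 2.5, 3.0])            # node areas (arb.)
f_nodes = mixture_fraction(T_nodes)
print(f"f_equil = {f_equil:.2f}, "
      f"U = {mixture_uniformity(f_nodes, a_nodes, f_equil):.3f}")
```

The printed f_equil of about 0.31 matches the totally mixed value quoted for these experiments.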
Figure 6 compares the mixture uniformity parameter for all of the baseline modules tested as a function of momentum-flux ratio. This plot confirms the qualitative observation that an increase in the momentum-flux ratio improves mixing at the initial planes but degrades the overall mixing downstream of the injection plane.

The first axial location (z/R = 0.0) examined for the J = 25 module shows a large region at f > 0.9, indicating very small or no jet penetration to the center. For this configuration, the relatively unmixed core persists with increasing z/R and is present at the last axial location of z/R = 1.0. This configuration represents an underpenetrated case. The first indication of jet penetration to the center for the three 8:1 aspect ratio modules tested is observed at the z/R = 0.0 plane of the J = 80 (J80MOD2) module. The mixture fraction value at the core of this plane ranges between 0.8 and 0.9, indicating that a portion of the jet fluid is mixed with the mainstream. At the z/R = 1.0 plane, the main portion of the flow is close to the equilibrium value, while a slightly larger f is seen at the center. The presence of the slightly warmer core shows that this configuration is still slightly underpenetrated. The mixing characteristics of this module are similar to those at J = 25 (J25MOD1).

Corresponding results were obtained for the 4:1 aspect ratio slots (J25MOD5 and J80MOD5). The first axial location for the J = 25 module (at z/R = 0.0) shows a relatively large central region with mixture fraction values in the range of 0.8-0.9. This f value is less than unity, indicating slight jet penetration and mixing at the center of the module. Compared to the round-hole jets (J25MOD1), the region of near-unity values of f is larger. The jet penetration for the round-hole jets is stronger at this J value; therefore, the high-mixture-fraction region is smaller. As described previously, the 8:1 aspect ratio module at J = 25 (J25MOD2) represents a case of underpenetration with central f values above 0.9. At downstream locations, the J = 25 module (J25MOD5) produces a relatively well-mixed flowfield with no indication of unmixed walls. At z/R = 1.0, however, a slightly unmixed core is observed.

As J is increased, the penetration to the center is enhanced and the mixture fraction values at the core of the module at the initial axial locations decrease. At J = 80 (J80MOD5), a relatively low-f region is seen at the first axial location. At downstream locations, a cool center and relatively unmixed regions along the walls are produced. At this momentum-flux ratio, as well as at J = 52 (J52MOD5), the jets overpenetrate, a condition that is not desirable from an overall mixing standpoint.

Figure 12 compares the mixture uniformity parameter for the three 4:1 aspect ratio geometries. The trend is very similar to that described for the baseline modules. At the initial planes, the higher the momentum-flux ratio, the better the mixture uniformity. At downstream locations, the J value with the most initial overpenetration (80) is the poorer mixer due to the degradation of circumferential mixing (J80MOD5). The slot aspect ratio affects 1) the amount of jet mass injected per unit length and 2) the axial extent over which the mass is injected.
For a given momentum-flux ratio and number of orifices, the smaller aspect ratio slots penetrate further into the cross stream. The larger aspect ratio slots, on the other hand, produce a stronger swirl component that enhances the circumferential mixing. Figure 13 compares the mixture uniformity parameter for the 8:1 and 4:1 aspect ratio slots. At the lower and intermediate J values, the 4:1 aspect ratio geometry is the better mixer at all axial locations. At the highest J value tested, however, the 8:1 aspect ratio behaves as the better mixing geometry beyond z/R = 0.5. This is because of the overpenetration of the jets at J = 80 (J80MOD5), which improves mixing at the initial planes but produces unmixed regions along the walls at downstream axial locations. As the slot angle is changed, the J value at which one mixer demonstrates more desirable mixing characteristics than the other can also change.

The slot angle affects 1) the axial length over which jet mass is injected and 2) the 'blockage' that the jets present to the mainstream flow. Examining the flowfield at the first axial location for these modules shows that by increasing the slot angle, the jet penetration decreases. The jet penetration at the initial axial location, although different for each module, results in similar values of the mixture uniformity parameter, as shown in Fig. 18. The 0-deg slots (J52MOD3) produce the most jet penetration and display the worst mixture uniformity parameter at z/R = 1.0. The optimum mixer based on these four cases appears to exist at an angle between 45 and 67.5 deg. These results suggest that slot angle does not have a big impact on mixture uniformity. However, this observation cannot be extended to cases where the aspect ratio, number of orifices, and momentum-flux ratio are allowed to vary along with slot angle.

Conclusions

1) Jet-to-mainstream momentum-flux ratio J and orifice geometry significantly impact the mixing characteristics of jets in a cylindrical geometry.

2) For a fixed number of orifices, the coupling between J and orifice geometry determines the extent of penetration and circumferential mixing in a can configuration.

3) From an overall-mixing standpoint, moderate penetration to the center is desirable. Underpenetration forms a relatively unmixed core that persists at downstream locations. Overpenetration degrades circumferential mixing and forms unmixed regions along the walls.

4) For the momentum-flux ratio values considered, increasing the aspect ratio of slanted slots reduces jet penetration to the center and enhances mixing along the walls.

5) For eight 4:1 aspect ratio slot orifices at J = 52, increasing the angle of the slots with respect to the mainstream reduces jet penetration while not markedly affecting the mixture uniformity one duct radius from the orifice leading edge.

6) The near-optimum mixing modules identified in this study were based on a fixed number of orifices and limited variations in orifice angle and aspect ratio. Further investigation is needed to identify optimum mixing conditions when the number of orifices, orifice aspect ratio, and angle are varied over a larger parameter space.
(All modules are presented in Ref. 7.) While the leading edge of each orifice was fixed at the same axial location (z/R = 0.0), the axial extent of jet mass addition varied according to orifice size and, in the case of the slots, the slot angle.

Fig. 3 Schematic of the test facility. Module 1 - Baseline Geometry (Holes); DR 1.26.

Three baseline geometries were tested as part of the parametric experiments. Figures 4 and 5 present the mixture fraction variations between planes z/R = 0.0 and z/R = 1.0 for the momentum-flux ratio range endpoints: J = 25 and 80 (cases J25MOD1 and J80MOD1). The actual J is shown in the figure caption. A comparison of the mixture fraction distribution at the first axial location (z/R = 0.0) shows a decrease in f at the center with increasing momentum-flux ratio. For J = 25 (J25MOD1), f is in the range of 0.8-0.9 at the core of the module, indicating the penetration of some jet fluid to the center. For J = 80 (J80MOD1), the mixture fraction values at the center are in the range 0.2-0.3. These f values are at or below the totally mixed value of f (f_equil = 0.31), indicating overpenetration to the center.

Figure 9 compares the mixture uniformity parameter for the 8:1 aspect ratio geometries. At the first axial location, the J = 25 module (J25MOD2) produces degraded mixing due to underpenetration.

Fig. 18 Effect of slot angle on mixture uniformity.
Table 1 (Continued) Summary of selected jet mixing studies.
Table 2 Axial location of orifice trailing edge and orifice blockage.
Fig. 1 Mixing module dimensions.
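The totally mixed value f_equil = 0.31 quoted above is consistent with the standard definition of the equilibrium mixture fraction as the jet fraction of the total mass flow; assuming that conventional definition:

f_{equil} = \frac{\dot{m}_{jet}}{\dot{m}_{jet} + \dot{m}_{main}}

so f_equil = 0.31 implies that the jets supply roughly 31% of the total mass flow in these tests.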
4,286.8
1995-01-01T00:00:00.000
[ "Engineering" ]
A Dynamic Model of Multiple Time-Delay Interactions between the Virus-Infected Cells and Body's Immune System with Autoimmune Diseases

The immune system is a complex interconnected network consisting of many parts, including organs, tissues, cells, molecules and proteins, that work together to protect the body from illness when germs enter the body. An autoimmune disease is a disease in which the body's immune system attacks healthy cells. It is known that when the immune system is working properly, it can clearly recognize and kill the abnormal cells and virus-infected cells. But when it does not work properly, the human body will not be able to recognize the virus-infected cells and, therefore, it can attack the body's healthy cells when there is no invader, or fail to stop an attack after the invader has been killed, resulting in autoimmune disease. This paper presents a mathematical model of virus-infected development in the body's immune system, considering the multiple time-delay interactions between the immune cells and virus-infected cells with autoimmune disease. The proposed model aims to determine the dynamic progression of virus-infected cell growth in the immune system. The patterns of how the virus-infected cells spread and the development of the body's immune cells with respect to time delays are derived in the form of a system of delay partial differential equations. The model can be used to determine whether the virus-infected free state can be reached or not as time progresses. It can also be used to predict the number of the body's immune cells at any given time. Several numerical examples are discussed to illustrate the proposed model. The model can provide a real understanding of the transmission dynamics and other significant factors of the virus-infected disease and the body's immune system subject to the time delay, including approaches to reduce the growth rate of virus-infected cells and the autoimmune disease as well as to enhance the immune effector cells.

Introduction

Human beings are constantly exposed to germs such as bacteria, viruses and toxins (chemicals produced by microbes) that enter the human body and cause the infections and diseases that eventually make people sick. The body is made up of many types of cells. Usually, cells grow and divide to produce new cells. A well-working immune system can prevent germs from entering the body and destroys any infectious microorganisms that do invade the body [1][2][3]. As long as our immune system is working smoothly, we often do not pay much attention to it or may not even know that it is there. However, if it stops working properly because it is weak or cannot fight particular germs or diseases, then we become sick. The germs that our body has never encountered before are also likely to make us sick [4]. Some germs will only make you ill the first time you come into contact with them. When the body senses danger from a virus or infection, the immune system will respond and attack it. The human immune system is complex and is the body's defense system. It is a complex network consisting of many parts, including cells, tissues, molecules and organs, working together to defend the body against invaders as well as to fight the infections and diseases when germs enter our body [1,2,5]. The skin is also a part of the immune system that prevents germs from entering the body [4]. Our immune system, believe it or not, works very hard to keep us healthy.
The main tasks of the body's immune system are to attack and destroy substances that are foreign to our body, such as bacteria and viruses, or to limit the extent of their harm if they get in [5]. When our immune system is working properly, it can recognize which cells are ours and which substances are foreign to our body. It then activates, mobilizes, attacks and kills foreign invader germs that can cause us harm. In fact, our immune system learns about any germs after we have been exposed to them. Our body develops antibodies to protect us from those specific germs [1,6]. When we are given a vaccine, for example, our immune system builds up antibodies to the foreign cells in the vaccine and will quickly remember these foreign cells and destroy them if we are exposed to them in the future. However, when our immune system is not working properly, the body attacks normal and healthy cells when there is no invader, or does not stop an attack after the invader has been killed, resulting in autoimmune disease [1][2][3]5].

Developing mathematical models to predict the growth of tumors, virus-infected cells and immune cells has been of interest in the area of cancer epidemiology research [7][8][9][10] and infectious disease epidemiology [11,12] in the past few decades. Many models [9,10,[13][14][15][16] have been proposed using ordinary differential equations and partial differential equations in the past several decades, and using delay partial differential equations in recent years, for characterizing tumor-immune dynamic growth, but there is still no consensus on the modeling due to the complexity of virus-infected and tumor cancer growth in the body's immune system and the growth patterns of the tumors and virus-infected cells [16]. Many researchers [7,[17][18][19][20][21][22][23][24] have used the existing prey-predator modeling concept [25,26] to study and model the tumor-immune interactions [7,27,28] and the effects of tumor growth [17,29,30]. To simplify an understanding of the interaction between tumor and immune cells, several researchers used the concept of the prey-predator system [24,29]. Here, the immune cells play the role of the predator, while the tumor or virus-infected cells play the role of the prey. In other words, the predator is the immune system that kills the tumor cells (prey) [24]. Haque et al. [54] analyzed a predator-prey model using standard disease incidence. Naji and Mustafa [56] studied a dynamic model of eco-epidemiology considering nonlinear disease incidence rates with an infective type of disease in prey. Mukhopadhyaya and Bhattacharyya [36] studied the effect of delay on a prey-predator model with disease in the prey, considering a Holling type II functional response. Wang et al. [43] studied a predator-prey model with distributed delays. Huang et al. [22] recently studied a stochastic predator-prey model with a Holling II increasing function in the predator and discussed analytic results for its dynamics. Jana and Kar [23] studied a three-dimensional epidemiological dynamic model incorporating a time delay that represents the time taken by a susceptible prey to become infected. Lestari et al. [29] discussed an epidemic model of cancer with chemotherapy in the form of a system of non-linear differential equations with three sub-populations. They presented the point of equilibrium and numerically determined the reproduction number and the growth rate of cancer cells.
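Several of the studies above build on the textbook prey-predator system with logistic prey growth and a Holling type II functional response; the generic form below is shown only for orientation and is not the model developed in this paper:

\frac{dx}{dt} = r x \left(1 - \frac{x}{K}\right) - \frac{\beta x y}{1 + \beta h x}, \qquad \frac{dy}{dt} = \frac{c\,\beta x y}{1 + \beta h x} - d\,y

where x is the prey (tumor or virus-infected cells), y the predator (immune effector cells), r and K the prey growth rate and carrying capacity, β the attack rate, h the handling time, c the conversion efficiency, and d the predator death rate. These symbols are generic and should not be confused with the a, b, c, ... parameters defined below.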
Pham [63] studied a model to estimate the number of deaths related to COVID-19 based on US data, and recently Pham [64] studied a mathematical model that considers the time-dependent effects of various pandemic restrictions and changes related to COVID-19, such as reopening states, social distancing, reopening schools and face mask mandates in communities.

In this paper, we develop a new mathematical model considering the multiple time-delay interactions between the immune cells and virus-infected cells with an autoimmune disease, in the form of delay partial differential equations. The model can be used to determine the dynamic progression of virus-infected cell growth and to observe the patterns of how the virus-infected cells spread in the body's immune system with respect to time delays. In Section 2, we discuss all the model assumptions and the development of the mathematical time-delay virus-immune model of the body's immune system, considering the multiple time-delay interactions between the immune cells (or effector cells) and virus-infected cells with an autoimmune disease. The model aims to predict the dynamic progression of virus-infected cell growth in the immune system. Section 3 discusses several numerical examples to illustrate the proposed model and shows numerical results for various cases of whether a virus-infected free state can be reached or not as time progresses. Section 4 gives a brief conclusion and future research problems.

A Mathematical Model with Multiple Time-Delay Interactions between Virus-Infected and Immune Effector Cells

As mentioned earlier, many researchers [7,17,22,28] have developed various prey-predator models and, more recently, mathematical models to investigate the interactions between tumor cells and the immune system, including tumor-immune models that consider an interaction between the tumor and immune cells with a time delay. In this section, we discuss a new virus-immune time-delay model of the body's immune system that considers the multiple interactions between the virus-infected cells and the body's immune cells with an autoimmune disease. Following the same concept as the prey-predator models in the literature, in this new model the immune effector cells play the role of the predator while the virus-infected cells play the role of the prey. The effector cell, a term usually used to describe cells in the immune system, is a cell that performs a specific function in response to a stimulus or defends the body in an immune response. We first describe a list of our modeling assumptions, based in part on a recent study by Lestari et al. [29], and then present a derivation of the mathematical modeling results as follows.

Notation: We use the following notation throughout the paper:
a = the intrinsic growth rate per unit time
b = the elimination rate of the virus-infected cells by the healthy immune system

Immune Cell Model Formulation

In a population of healthy immune (effector) cells (in this case, the predator), we assume the following:
1. The effector cell population has a constant growth rate, s, of effector cells [29].
2. The effector cell population has a natural death rate, c, of effector cells [29].
3. There is an increase in the number of effector cells at the growth rate d, with a maximum degree of recruitment of immune-effector cells in response to the shift toward virus-infected cells [29], with a τ3 time delay.
4. There is a constant rate f at which the immune system attacks the body's own healthy (effector) cells, resulting in an autoimmune disease.
The constant f, in general, will be very small compared to c, so that when I is not too large, the term fI² will be negligible compared to cI.

5. There will be a reduction in the number of effector cells due to their interaction with the virus-infected cells, with a constant rate m [29].

From assumptions (1-3) we can derive Equation (1), and from assumptions (4-5) we can derive Equation (2). Combining Equations (1) and (2), a model of the rate of the immune-effector cells governing the interactions between the immune-effector and virus-infected cells over time can be presented as Equation (3).

Virus-Infected Cell Model Formulation

In a population of virus-infected cells (in this case, the prey), where a virus invades the healthy immune cells of its host and can also infect other cells, we assume the following:

6. The virus-infected cell population has a constant growth rate, a, [29] with consideration of a constant factor of growth rate, g, and a τ1 time delay before the virus is to be infected.
7. There will be a constant elimination rate, b, of the virus-infected cells by the healthy immune system (effector cells), with a τ2 time delay. In other words, b measures how efficiently the effector cells kill the virus-infected cells.
8. The number of virus-infected cells will decline by a constant parameter of the virus-infected cleanup of effector cells, p, [29] with a τ3 time delay.
9. There will be a reduction in the number of virus-infected cells at a constant rate e, accounting for encounters of two virus-infected cells per unit of time competing with each other due to the limited number of host cells. The constant rate e here can be considered to be very small.

From assumptions (6-7) we can derive Equation (4); here, the constant parameter b measures how efficiently effector cells kill virus-infected cells. From assumptions (8-9) we can derive Equation (5). From Equations (4) and (5), a model of the rate of the virus-infected cells over time can be presented as Equation (6). Thus, from Equations (3) and (6), a new virus-immune time-delay model for the body's immune system with consideration of multiple interactions between the virus-infected cells and the body's immune cells with autoimmune disease is given as Equation (7).

If we do not consider the effect of the chemotherapy drug in the model studied by Lestari et al. [29], then their model [29] can be considered a special case of our model, as given in Equation (7), with f = 0, e = 0, g = 0, τ1 = 0, τ2 = 0 and τ3 = 0. We now wish to determine the number of immune-effector cells I(t) and virus-infected cells V(t) at any given time. We developed a program using R software to calculate and plot the two functions I(t) and V(t) with respect to time t, as discussed in the next section.

Model Analysis

In this section, we present an analysis of the proposed model. Table 1 shows the parameter values that we use in our analysis, based on some existing studies [29,[39][40][41], for the illustration of our model. Any other sets of parameter values can easily be applied in the model. In this study, we consider various initial numbers of virus-infected cells and immune-effector cells, from 15,000 to 30,000 and from 50,000 to 75,000, respectively, to explore whether the results depend on those initial numbers of cells.
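The images of Equations (1)-(7) do not survive in this text, so the sketch below assembles a plausible delay system directly from assumptions 1-9 and integrates it with a fixed-step Euler scheme and history buffers. The functional forms and default parameter values are our assumptions for illustration only, not the authors' exact Equation (7); the authors report implementing their model in R, and Python is used here purely as a sketch.

import numpy as np

# Assumed general form built from assumptions 1-9 (a sketch, not the paper's
# exact Equation (7)):
#   dI/dt = s - c*I + d*I(t-tau3)*V(t-tau3) - f*I**2 - m*I*V
#   dV/dt = a*g*V(t-tau1) - b*I(t-tau2)*V - p*I(t-tau3)*V - e*V**2
def simulate(s=7000.0, c=0.1, d=1e-7, f=1e-9, m=1e-8,
             a=0.43, g=1.0, b=43e-7, p=1e-7, e=1e-10,
             tau1=1.0, tau2=1.0, tau3=1.0,
             I0=50_000.0, V0=30_000.0, t_end=300.0, dt=0.01):
    n = int(t_end / dt)
    lag1, lag2, lag3 = (int(tau / dt) for tau in (tau1, tau2, tau3))
    I = np.full(n + 1, I0)   # constant history before t = 0
    V = np.full(n + 1, V0)
    for k in range(n):
        # delayed states; clamp the index to the initial history for t < tau
        V1 = V[max(k - lag1, 0)]
        I2 = I[max(k - lag2, 0)]
        I3, V3 = I[max(k - lag3, 0)], V[max(k - lag3, 0)]
        dI = s - c * I[k] + d * I3 * V3 - f * I[k] ** 2 - m * I[k] * V[k]
        dV = a * g * V1 - b * I2 * V[k] - p * I3 * V[k] - e * V[k] ** 2
        I[k + 1] = max(I[k] + dt * dI, 0.0)   # cell counts stay non-negative
        V[k + 1] = max(V[k] + dt * dV, 0.0)
    return np.linspace(0.0, t_end, n + 1), I, V

t, I, V = simulate()
print(f"day 300: V = {V[-1]:,.0f}, I = {I[-1]:,.0f}")

With placeholder rates of this kind, the clamped Euler loop reproduces the qualitative behavior discussed in the cases below: V(t) either stabilizes at a positive level or decays to the virus-free state, depending on a, b and s.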
We discuss below several cases based on various parameter values of the virus-infected growth rate, a, the elimination rate of the virus-infected cells by the immune-effector cells, b, and the growth rate of the immune-effector cells, s, as follows:

Case 1: a = 0.43, b = 43 × 10⁻⁷, s = 7000. We first assume that the initial number of virus-infected cells is V0 = 30,000 and the initial number of immune-effector cells is I0 = 50,000. From Figure 1a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively, as expected. The virus-infected count begins to increase and reaches its highest point around the 14th day, where (V, I) = (72,248, 81,228), and then starts to decrease slowly, with (V, I) = (31,905, 90,578) at the 300th day. As seen in the graphs in Figure 1a, on the one hand, the number of immune-effector cells keeps increasing but starts to slowly stabilize after the 100th day at the level of 90,578. On the other hand, the number of virus-infected cells first increases until it reaches the maximum number of infected cells at 72,304 (see Figure 1b), then starts to decrease and slowly stabilizes after around the 280th day, staying just above the level of the initial number of virus-infected cells, at 31,900 cells. It seems that in this case, with a given growth rate of effector cells s = 7000 cells per day and a virus-infected growth rate a = 0.43, the body will not be able to reach the virus-free state. Figure 1c,d show the relationship between the immune-effector cells and the virus-infected cells. Figure 1e,f show the 3D relationships of the virus-infected cells, the immune-effector cells and time. We observe the same results even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 1g,h), as well as when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 1i,j). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result of whether the body reaches the virus-free state or not. This shows that our model can be used to obtain the results without needing to know the exact initial number of virus-infected cells or immune-effector cells in the body.

Let us consider I0 = 75,000 and V0 = 15,000: from Figure 1k,l, we observe that the virus-infected count keeps increasing significantly from the beginning until around the 50th day, where (V, I) = (31,291, 90,521), and slowly stabilizes around the 100th day at the level of 31,770, while the number of immune-effector cells also keeps increasing but starts to slowly stabilize after the 60th day at the level of 90,560. In this case (I0 = 75,000 and V0 = 15,000), the result is the same as in all the cases above: the body will not be able to reach the virus-free state. This confirms that the initial numbers of virus-infected cells and immune-effector cells do not influence the end results.

Model comparison: We now use the model studied by Lestari et al. [29], as mentioned earlier, to compare their modeling result (i.e., without the effect of the chemotherapy drug and with f = 0, e = 0, g = 0, τ1 = 0, τ2 = 0 and τ3 = 0) to our model from Equation (7). From Figure 1m, we observe that the number of immune-effector cells, I, keeps increasing but starts to slowly stabilize after the 150th day at the level of 169,669 cells.
The virus-infected count, V (see Figure 1n), begins to increase and reaches its highest point around the 14th day, with (V, I) = (107,189, 102,683), but then decreases sharply until it reaches the virus-free state, (V, I) = (0, 168,064), after the 100th day, with the number of immune-effector cells around 168,064 cells. In this example, when the values e, f and g are not equal to zero, our proposed model shows that one cannot reach the virus-free state, because we consider the autoimmune disease factor in our model, whereas one can reach the virus-free state at the 100th day using the model developed by Lestari et al. [29], since they did not consider the autoimmune disease factor in their study.

Case 2: This is the same as Case 1, except s = 10,000 (instead of s = 7000). From Figure 2a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively, as expected. It should be noted that the number of immune-effector cells (see Figure 2a) keeps increasing but starts to slowly stabilize after the 250th day at the level of 114,790. The virus-infected count is shown in Figure 2b. The result is about the same even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 2g,h). The result is also about the same even when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 2i,j). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result.

Case 3: This is the same as Case 1 (i.e., b = 43 × 10⁻⁷, s = 7000), except a = 0.043. From Figure 3a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively, as expected. It should be noted that the number of immune-effector cells (see Figure 3a) keeps increasing but starts to slowly stabilize after the 50th day at the level of 90,310, while the virus-infected count (see Figure 3b) decreases sharply until it reaches the virus-free state at (V, I) = (0, 90,310) after the 50th day. The result is about the same even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 3c,d). The result is also about the same even when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 3e,f). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result.

Case 4: This is the same as Case 3, except s = 10,000. From Figure 4a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively. It should be noted that the number of immune-effector cells (see Figure 4a) keeps increasing but starts to slowly stabilize after the 40th day at the level of 114,416, while the virus-infected count (see Figure 4b) decreases significantly until it reaches the virus-free state at (V, I) = (0, 114,416) after the 40th day. The result is about the same even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 4c,d). The result is also about the same even when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 4e,f). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result.

Case 5: Table 2 below shows the parameter values used here, which are the same as in Case 1 except that b is 4.3 × 10⁻⁴. Any other sets of parameter values can easily be applied in the model.
From Figure 5a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively. It should be noted that the number of immune-effector cells (see Figure 5a) keeps increasing but starts to slowly stabilize after the 30th day at the level of 89,920, while the virus-infected count (see Figure 5b) decreases significantly right after the first day and quickly reaches the virus-free state at (V, I) = (0, 60,467) after the third day. The result is about the same even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 5c,d). The result is also about the same even when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 5e,f). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result.

Case 6: This is the same as Case 5, except s = 10,000 cells/day. From Figure 6a,b, we can observe that the initial numbers of virus-infected cells and immune-effector cells are 30,000 and 50,000, respectively. It should be noted that the number of immune-effector cells (see Figure 6a) keeps increasing but starts to slowly stabilize after the 30th day at the level of 113,310, while the virus-infected count (see Figure 6b) decreases significantly right after the first day and quickly reaches the virus-free state at (V, I) = (0, 68,680) after the third day. The result is about the same even when the initial number of immune-effector cells is I0 = 75,000 (see Figure 6c,d). The result is also about the same even when the initial number of virus-infected cells is reduced to V0 = 15,000 (see Figure 6e,f). It is worth noting that the initial numbers of virus-infected cells and immune-effector cells do not influence the end result.

Conclusions

This paper discusses a mathematical model of the body's immune system, considering the multiple time-delay interactions between the immune cells and virus-infected cells with an autoimmune disease, using delay partial differential equations. The model can be used to determine the dynamic progression of virus-infected cell growth and to observe the patterns of how the virus-infected cells spread in the body's immune system with respect to time delays. The model can be used to predict whether and when the virus-free state can be reached as time progresses, as well as the number of the body's immune cells at any given time. From the numerical examples, we observe that the initial numbers of virus-infected cells and immune-effector cells, which are needed to obtain the solutions of the delay partial differential equations, do not influence the end results. We plan to broaden our model in the near future by considering chemotherapy drug treatment subject to time delays.
5,559.4
2021-09-07T00:00:00.000
[ "Medicine", "Engineering" ]
A Refined End-to-End Discourse Parser

The CoNLL-2015 shared task focuses on shallow discourse parsing, which takes a piece of newswire text as input and returns the discourse relations in PDTB style. In this paper, we describe our discourse parser that participated in the shared task. We use 9 components to construct the whole parser, which identifies discourse connectives, labels arguments and classifies the sense of Explicit or Non-Explicit relations in free text. Compared to previous discourse parsers, new components and features are added in our system, which further improves the overall performance of the parser. Our parser ranks first on both test datasets, i.e., PDTB Section 23 and a blind test dataset.

Introduction

An end-to-end discourse parser is given free text as input and returns discourse relations in PDTB style, where a connective acts as a predicate that takes two text spans as its arguments. It can benefit many downstream NLP applications, such as information retrieval, question answering and automatic summarization. The extraction of exact argument spans and Non-Explicit sense identification have been shown to be the main challenges of discourse parsing (Lin et al., 2014). Since the release of the Penn Discourse Treebank (PDTB) (Prasad et al., 2008), much research has been carried out on the PDTB to perform the subtasks of a full end-to-end parser, such as identifying discourse connectives, labeling arguments and classifying Explicit or Implicit relations. To identify discourse connectives from non-discourse ones and to classify the Explicit relations, Pitler and Nenkova (2009) extracted syntactic features of connectives from the constituent parses, and showed that syntactic features improved performance in both subtasks. For the argument labeling subtask, Ghosh et al. (2011) regarded it as a token-level sequence labeling task using conditional random fields (CRFs). Lin et al. (2014) proposed a tree subtraction algorithm to extract the arguments. Kong et al. (2014) adopted a constituent-based approach to label arguments. As for Implicit sense classification, Lin et al. (2009) and Rutherford and Xue (2014) performed the classification using several linguistically-informed features, such as verb classes, production rules and Brown cluster pairs. Lan et al. (2013) presented a multi-task learning framework that uses the prediction of explicit discourse connectives as an auxiliary learning task to improve performance. All of this research focuses on the subtasks of the PDTB and can be viewed as isolated components of a full parser. Lin et al. (2014) constructed a full parser on top of these subtasks, which contained multiple components joined in a sequential pipeline architecture, including a connective classifier, argument labeler, explicit classifier, non-explicit classifier, and attribution span labeler. In this paper, we follow the framework of Lin et al. (2014) to construct a discourse parser. However, our work differs from Lin's in that our system introduces new components and features to improve the overall performance.
Specifically, (1) we build two different extractors for Arg1 and Arg2, respectively, for labeling Explicit arguments in the PS case (i.e., when Arg1 is located in some previous sentence of the connective); (2) we add new features to capture more information for classification or recognition; (3) we build two different argument extractors for Non-EntRel relations in Non-Explicit; and (4) we use the refined arguments to improve the Non-Explicit sense classification.

The organization of this work is as follows. Section 2 gives a sketch of our parser in a flow chart and the function of every component in this architecture. Section 3 describes the components and features in detail. Section 4 reports the preliminary experimental results on the training and development datasets, and the final results on two test datasets are shown in Section 5. Section 6 concludes this work.

System Overview

We design the discourse parser as a sequential pipeline, shown in Figure 1; the 9 components of our parser are listed as follows, and a control-flow sketch in code follows this overview.

First, for texts with Explicit connective words: (1) Connective Classifier identifies the discourse connectives from non-discourse ones. (2) Arg1 Position Classifier decides the relative position of Arg1: whether it is located within the same sentence as the connective (SS) or in some previous sentence of the connective (PS). (3) SS Arguments Extractor extracts the spans of Arg1 and Arg2 in the SS case. In the PS case, we build two extractors to identify the text spans for PS Arg1 and PS Arg2, respectively: (4) PS Arg1 Extractor extracts Arg1 for PS, and (5) PS Arg2 Extractor extracts Arg2 for PS. (6) Explicit Sense Classifier identifies the sense that each Explicit connective conveys.

Second, for all adjacent sentence pairs within each paragraph that are not identified in any Explicit relation: (7) Non-Explicit Sense Classifier classifies the sense of each sentence pair into one of the Non-Explicit relation senses. Since attribution is not annotated for EntRel relations, if the output of the above Non-Explicit sense classifier is EntRel, we regard the previous sentence as Arg1 and the next one as Arg2. Otherwise, we use the following two argument extractors, (8) Implicit Arg1 Extractor and (9) Implicit Arg2 Extractor, to label Arg1 and Arg2.
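The control flow of these nine components can be sketched as follows; every function here is a trivial stub standing in for the trained classifiers and extractors of Section 3, so this is a minimal sketch of the pipeline wiring, not the actual implementation:

# Control-flow sketch of the 9-component pipeline. Every component below is a
# placeholder stub for a trained classifier or extractor from Section 3.

def find_connective_candidates(doc):
    # surface-string matching against a connective lexicon (tiny stand-in set)
    return [w for w in doc.split() if w.lower() in {"but", "because", "as", "when"}]

def connective_classifier(c):              return True                    # (1)
def arg1_position_classifier(c):           return "SS"                    # (2)
def ss_arguments_extractor(c):             return "<arg1>", "<arg2>"      # (3)
def ps_arg1_extractor(c):                  return "<arg1>"                # (4)
def ps_arg2_extractor(c):                  return "<arg2>"                # (5)
def explicit_sense_classifier(c, a1, a2):  return "Comparison.Contrast"   # (6)
def non_explicit_sense_classifier(a1, a2): return "EntRel"                # (7)
def implicit_arg1_extractor(sent):         return sent                    # (8)
def implicit_arg2_extractor(sent):         return sent                    # (9)

def parse(doc, adjacent_pairs):
    relations = []
    for conn in filter(connective_classifier, find_connective_candidates(doc)):
        if arg1_position_classifier(conn) == "SS":
            arg1, arg2 = ss_arguments_extractor(conn)
        else:  # PS: two separate extractors for the two arguments
            arg1, arg2 = ps_arg1_extractor(conn), ps_arg2_extractor(conn)
        sense = explicit_sense_classifier(conn, arg1, arg2)
        relations.append(("Explicit", conn, arg1, arg2, sense))
    # adjacent_pairs: sentence pairs within a paragraph not in any Explicit relation
    for prev_sent, next_sent in adjacent_pairs:
        sense = non_explicit_sense_classifier(prev_sent, next_sent)
        if sense == "EntRel":
            arg1, arg2 = prev_sent, next_sent
        else:
            arg1 = implicit_arg1_extractor(prev_sent)
            arg2 = implicit_arg2_extractor(next_sent)
            sense = non_explicit_sense_classifier(arg1, arg2)  # re-run on refined args
        relations.append(("Non-Explicit", None, arg1, arg2, sense))
    return relations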
Components and Features

Generally, our parser consists of 9 components, which compose an Explicit parser and a Non-Explicit parser. Most of the features used in our parser are borrowed from previous work (Kong et al., 2014; Lin et al., 2014; Rutherford and Xue, 2014).

Connective Classifier

Since the input of the discourse parser is free text, the first thing we need to do is to identify all connective occurrences in the text, and then use the connective classifier to decide whether they function as discourse connectives or not. For each connective occurrence C, we extract features from its context, its part-of-speech (POS) tag and the parse tree of the connective's sentence. Note that prev1 and next1 indicate the first previous word and the first next word of connective C, respectively. For a node in the parse tree, we use the POS combinations of the node, its parent and its children to represent the linked context.

The features we used for connective classification consist of the following: (1) Pitler's: C string (case-sensitive), self-category (the highest node in the parse tree that covers only the connective words), parent-category (the parent of the self-category), left-sibling-category (the left sibling of the self-category), right-sibling-category (the right sibling of the self-category), C-Syn interaction (the pairwise interaction features between the connective C and each category feature, i.e., self-category, parent-category, left-sibling-category, right-sibling-category), and Syn-Syn interaction (the interaction features between pairs of category features); (2) Lin's: C POS, prev1 + C string, prev1 POS, prev1 POS + C POS, C string + next1, next1 POS, C POS + next1 POS, path of C's parent → root, and compressed path of C's parent → root; (3) our newly-proposed features: the POS tags of nodes from C's parent → root, parent-category linked context, and right-sibling-category linked context. Our three new features are intended to capture more syntactic context information of the connective C for connective classification.
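As an illustration of how such feature templates can be realized, the sketch below builds a few of the lexical and POS features listed above and feeds them to a logistic regression, used here as a stand-in for the MaxEnt classifier the paper trains with MALLET; the parse-tree features (self-category, paths, linked contexts) are omitted because they require the constituent parse:

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def connective_features(tokens, pos_tags, i):
    """Lexical/POS features around a connective candidate at index i
    (a subset of Lin's templates; syntactic features omitted)."""
    c, c_pos = tokens[i], pos_tags[i]
    prev_w = tokens[i - 1] if i > 0 else "<S>"
    prev_p = pos_tags[i - 1] if i > 0 else "<S>"
    next_w = tokens[i + 1] if i + 1 < len(tokens) else "</S>"
    next_p = pos_tags[i + 1] if i + 1 < len(pos_tags) else "</S>"
    return {
        "C": c, "C_POS": c_pos,
        "prev1+C": prev_w + "|" + c, "prev1_POS": prev_p,
        "prev1_POS+C_POS": prev_p + "|" + c_pos,
        "C+next1": c + "|" + next_w, "next1_POS": next_p,
        "C_POS+next1_POS": c_pos + "|" + next_p,
    }

# One-hot encoding of categorical features plus logistic regression
# approximates the MaxEnt setup used in the paper.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
# model.fit([connective_features(toks, tags, i), ...], [1, 0, ...])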
Arg1 Position Classifier

After identifying the discourse connectives in the texts, we come to locate the positions of Arg1 and Arg2 of the connective C. Since Arg2 is defined as the argument with which the connective is syntactically associated, its position is fixed once we locate the discourse connective C. So we only need to identify the relative position of Arg1: whether it is located within the same sentence as the connective (SS) or in some previous sentence of the connective (PS). We do not identify the case in which Arg1 is located in some sentence following the sentence containing the connective (FS), because the statistical distribution of (Prasad et al., 2008) shows that less than 0.1% of Explicit relations are FS. The features consist of the following: (1) Lin's: C string, C position (the position of connective C in the sentence: start, middle, or end), C POS, prev1, prev1 POS, prev1 + C, prev1 POS + C POS, prev2, prev2 POS, prev2 + C, and prev2 POS + C POS; (2) our newly-proposed features: C POS + next1 POS, next2, and path of C → root. Note that prev2 and next2 indicate the second previous word and the second next word of connective C, respectively.

Argument Extractor

After the relative position of Arg1 is classified as SS or PS by the previous component, the argument extractor extracts the spans of Arg1 and Arg2 for the identified discourse connectives. According to (Kong et al., 2014), Kong's constituent-based approach outperforms Lin's tree subtraction algorithm for Explicit argument extraction. However, Lin only focused on the SS case, and Kong treated the immediately preceding sentence as a special constituent for PS, which means that they simply viewed the immediately preceding sentence as Arg1 and only extracted Arg2 for PS. So we only follow Kong's constituent-based approach to extract Arg1 and Arg2 for SS, while for PS we build two different extractors for Arg1 and Arg2 separately. Our intuition is that the two arguments have different syntactic and discourse properties, and a unified model with the same feature set used for both may not have enough discriminating power.

SS Arguments Extractor: In the case of SS, we adopt (Kong et al., 2014)'s constituent-based approach without Joint Inference to extract Arg1 and Arg2. For PS, we build two argument extractors for Arg1 and Arg2, respectively, as follows.

PS Arg1 Extractor: We consider the immediately previous sentence of connective C as the text span where Arg1 occurs, and then build an extractor to label Arg1 in it. Similar to Lin's attribution span labeler, this extractor consists of two steps: splitting the sentence into clauses, and deciding, for each clause, whether it belongs to Arg1 or not. First, we use nine punctuation symbols (…,.:;?!-~) to split the sentence into several parts and use the SBAR tag in its parse tree to split each part into clauses. Second, we build a classifier to decide, for each clause, whether it belongs to Arg1 or not. On the one hand, the attribution relation is annotated in the PDTB, expressing the "ownership" relationship between abstract objects and individuals or agents; however, the attribution annotation is excluded in CoNLL-2015 (Xue et al., 2015). Therefore, we borrow several attribution features from (Lin et al., 2014) in order to distinguish the attribution-related span from others. On the other hand, according to the minimality principle of the PDTB, the argument annotation includes the minimal span of text that is sufficient for the interpretation of the relation. Since connectives have a very close relationship with the discourse relation, we adopt connective-related features to capture the text span of the relation. We choose the following features: (1) attribution-related features from (Lin et al., 2014): lemmatized verbs in curr, the first term of curr, the last term of curr, and the last term of prev + the first term of curr; and (2) our proposed connective-related features: lowercased C string and C category (the syntactic category of the connective: subordinating, coordinating, or discourse adverbial), where curr and prev indicate the current and previous clause, respectively, and the category of the connective C is obtained from the list provided in (Knott, 1996).

PS Arg2 Extractor: The PS Arg2 Extractor is similar to the PS Arg1 Extractor. However, they differ as follows: (1) in the first step, we consider the sentence containing connective C as the text span where Arg2 occurs, and besides the previous nine punctuation symbols, we also use the connective C itself to split the sentence; (2) we adopt different features to build the classifier: lowercased verbs in curr, lemmatized verbs in curr, the first term of curr, the last term of curr, the last term of prev, the first term of next, the last term of prev + the first term of curr, the last term of curr + the first term of next, production rules extracted from curr, curr position (i.e., the position of curr in the sentence: start, middle or end), C string, lowercased C string, C position, C category, path of C's parent → root, and compressed path of C's parent → root.
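A minimal sketch of the first, punctuation-based pass of the clause splitting used by the two PS extractors follows; the second pass, which further splits each part at SBAR nodes, is omitted because it needs the parse tree, and the exact delimiter handling here is our assumption:

import re

# The nine clause-delimiting punctuation symbols listed above;
# "..." is treated as a single delimiter.
SPLIT_PATTERN = r'\.\.\.|[.,:;?!\-~]'

def split_into_clause_candidates(sentence: str):
    """First-pass split of a sentence into clause candidates on punctuation.
    The second pass (splitting at SBAR nodes of the constituent parse) is
    not shown here."""
    parts = re.split(SPLIT_PATTERN, sentence)
    return [p.strip() for p in parts if p.strip()]

print(split_into_clause_candidates(
    "But the gains in Treasury bonds were pared; stocks staged a partial recovery."))
# -> ['But the gains in Treasury bonds were pared',
#     'stocks staged a partial recovery']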
Explicit Sense Classifier

From the previous components, we have identified all discourse connectives and their arguments in the texts. Here, we move to decide what Explicit relation each of them conveys. In addition to features borrowed from previous work, we propose five new features: parent-category linked context, the previous connective of as and its POS, and the previous connective of when and its POS. The first, parent-category linked context, provides more syntactic context information for the classification. The last four features are specially designed to disambiguate the relation senses of the connectives as and when, since these two connectives are often ambiguous between Contingency.Cause.Reason and Temporal.Synchrony. As shown in Example (1), the previous connective of the discourse connective as is But; therefore, the discourse connective as here carries the Contingency.Cause.Reason sense rather than Temporal.Synchrony.

(1) But the gains in Treasury bonds were pared as stocks staged a partial recovery.

Non-Explicit Parser

In this section, we discuss the identification of the Non-Explicit relations. Since the Non-Explicit relations are only annotated for adjacent sentence pairs within paragraphs, we first collect all adjacent sentence pairs within each paragraph that are not identified in any Explicit relation. We assume the previous sentence to be Arg1 and the next sentence to be Arg2, and then identify the sense using features extracted from (Arg1, Arg2). After that, we use the Implicit Arg1 Extractor and Implicit Arg2 Extractor to label Arg1 and Arg2 for Non-EntRel relations in Non-Explicit; for EntRel relations, we simply label the previous sentence as Arg1 and the next as Arg2. Moreover, as shown in Figure 1, we use the Non-Explicit sense classifier again to identify the sense on the refined arguments (the arguments extracted by the Implicit Arg1 and Arg2 Extractors) rather than on the adjacent sentence pairs (i.e., the previous sentence as Arg1 and the next sentence as Arg2). Our expectation is that the overall parser performance might be improved if we extract features on refined argument spans rather than on the original argument spans.

Non-Explicit Sense Classifier

According to previous work, this component is the most difficult one in the discourse parser. The features we adopt in this component are chosen from (Lin et al., 2009; Rutherford and Xue, 2014), including: production rules, dependency rules, first-last, first3, modality, verbs, Inquirer, polarity, the immediately preceding discourse connective of the current sentence pair, and Brown cluster pairs. For the collection of production rules, dependency rules and Brown cluster pairs, we used a frequency cutoff of 5 to remove infrequent features, and for Brown clusters, we choose 3,200 classes, as in (Rutherford and Xue, 2014).

Implicit Arg1 Extractor

The Implicit Arg1 Extractor is used to extract Arg1 for Non-EntRel relations in Non-Explicit, and works similarly to the PS Arg1 Extractor. We first split the sentence into clauses and then decide, for each clause, whether it belongs to Arg1 or not. The features extracted from the current and previous clauses (curr and prev) are: the first term of curr, the last term of prev, the cross product of the prev and curr production rules, the path of the first term of curr → the last term of prev, and the number of words of curr.

Implicit Arg2 Extractor

The Implicit Arg2 Extractor is similar to that for Arg1, but different features are extracted from the current, previous, and next clauses (curr, prev, and next), including: lowercased verbs in curr, the first term of curr, the last term of prev, the last term of prev + the first term of curr, the last term of curr + the first term of next, curr position, the cross product of the prev and curr production rules, the cross product of the curr and next production rules, the path of the first term of curr → the last term of prev, and the number of words of curr.
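As one concrete example of the Non-Explicit feature machinery above, the sketch below generates Brown cluster pair features for an argument pair and applies the frequency cutoff of 5; the word-to-cluster mapping is assumed to be loaded from the Brown cluster resource allowed in the closed track:

from collections import Counter
from itertools import product

def brown_cluster_pair_features(arg1_tokens, arg2_tokens, word2cluster):
    # word2cluster maps a token to its Brown cluster id (3,200 classes here),
    # assumed loaded from the shared-task Brown cluster resource.
    c1 = {word2cluster[w] for w in arg1_tokens if w in word2cluster}
    c2 = {word2cluster[w] for w in arg2_tokens if w in word2cluster}
    # Cartesian product of cluster ids across the two arguments.
    return Counter(f"{x}|{y}" for x, y in product(c1, c2))

def prune_rare(corpus_feature_counts, cutoff=5):
    # Frequency cutoff (5 in the paper) applied to production rules,
    # dependency rules and Brown cluster pairs, counted over the training set.
    return {feat for feat, n in corpus_feature_counts.items() if n >= cutoff}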
Experiments on Training Data

To implement the 9 components described above, we compared two supervised machine learning algorithms, i.e., MaxEnt and Naive Bayes, as implemented in the MALLET toolkit (mallet.cs.umass.edu). For each component, we chose the algorithm with the better performance. Specifically, we use Naive Bayes to build the Non-Explicit Sense Classifier, and MaxEnt for the other 8 components. We use PDTB Sections 02-21 for training and Section 22 for development, which are provided by CoNLL-2015 with parse trees and POS tags produced by the Berkeley Parser. We participate in the closed track; that is, only two external resources (i.e., Brown clusters and the MPQA Subjectivity Lexicon) are used in our discourse parser. According to the task requirements, a relation is considered to be correct if and only if: (1) the discourse connective is correctly detected (for Explicit discourse relations); (2) the sense of the discourse relation is correctly predicted; and (3) the text spans of the two arguments as well as their labels (Arg1 and Arg2) are correctly predicted. We use the official measure F1 (the harmonic mean of precision and recall) to evaluate performance.

Table 1 reports the results of the Explicit discourse parser on the development dataset for the three components (i.e., Connective Classifier, Arg1 Position Classifier and Explicit Sense Classifier) without error propagation (EP), where our new features are introduced. We find that the F1 scores of all these classifiers are increased by adding our new features (+new). To evaluate the performance of Explicit argument extraction, we build the PS baseline by labeling the previous sentence of the connective as Arg1, and the text span between the connective and the beginning of the next sentence as Arg2. Table 2 summarizes the results of Explicit argument extraction with exact matching and without error propagation; the corresponding PS baseline is shown within parentheses. Note that we removed leading or trailing punctuation from all text spans before evaluation. We see that the F1 of PS is improved by a large margin for Arg1, Arg2 and Both by using two separate PS argument extractors, and the overall F1 of Explicit argument extraction is also increased by 2.51%.

Results of Non-Explicit Parser

Table 4 reports the results for argument extraction on Non-EntRel relations in Non-Explicit without error propagation, where the first row shows the result of the baseline system, which labels the previous sentence as Arg1 and the next sentence as Arg2, and the second row shows the result when using the two Implicit extractors. As we expected, using two separate Implicit extractors achieves much better performance than the baseline. Table 5 reports the comparison results for the overall argument extraction of the parser with error propagation, where the first row indicates the performance when simply using the previous sentence as Arg1 and the next sentence as Arg2 for all Non-Explicit relations, and the second shows the results of using the two Implicit argument extractors for Non-EntRel relations. We see that the performance of argument extraction increases, but not by much, due to the error propagation from EntRel identification (P: 39.32%, R: 64.19%, F1: 48.76%; EP). Table 6 shows the overall results, where the first row is the overall performance of the parser when identifying Non-Explicit senses on the original arguments (i.e., adjacent sentence pairs), and the second row gives the results on the refined arguments. We find that the overall F1 of the parser is improved by 0.41% by extracting features on the refined arguments.

Table 6: Results of overall parser performance using the Non-Explicit sense classifier on original and refined arguments.
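The exact-match criterion stated above can be scored with a few lines; this is our simplified reading of the official scorer, with each relation reduced to a hashable tuple:

def prf(gold_relations, pred_relations):
    """Precision/recall/F1 over exact-match relations. Each relation is a
    tuple (connective_span, sense, arg1_span, arg2_span); a prediction counts
    as correct only if all four parts match a gold relation. This is a
    simplified reading of the official CoNLL-2015 scorer, not the scorer itself."""
    gold, pred = set(gold_relations), set(pred_relations)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1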
Results on Test Data Sets

The discourse parser described above is evaluated, in terms of F1, on two test datasets provided by the shared task: (1) Section 23 of the PDTB; and (2) a blind test set drawn from a similar source and domain. The officially released results are shown in Table 7. Our parser ranks first on both test datasets. Although the two test datasets are both from the newswire domain and in PDTB style, there are differences between them. For example, not all discourse connectives in the blind test dataset are listed in the PDTB; e.g., "upon" is annotated as a discourse connective in the blind test dataset while it is not in the PDTB.

We compare our discourse parser with Lin's on PDTB Section 23. We find that the new features proposed in this work do help increase the F1 of Explicit connective classification, by 0.54%. For Explicit argument extraction, our parser achieves better performance as well. However, a direct comparison on sense classification, as well as on overall parser performance, is not possible, since the sense labels of Explicit and Non-Explicit relations in CoNLL-2015 differ from Lin's: Lin used partial sense labels of the second level (Type), excluding several small categories, while CoNLL-2015 used different sense labels (parts of the three sense levels, with several small categories excluded and/or merged). Table 7 also shows the results of our parser and the 2nd-ranked parser on the blind test dataset; we see that our parser achieves better performance, especially on argument extraction.

Table 7: Results of our parser on PDTB Section 23 and the blind test dataset, Lin's parser on PDTB Section 23, and the 2nd-ranked parser on the blind test dataset. "All" indicates all relations (Explicit and Non-Explicit); "-" indicates not available.

Conclusion

In this work, we have implemented a refined discourse parser by adding new components and features based on Lin's system. Specifically, we (1) build two PS argument extractors (i.e., PS Arg1 Extractor and PS Arg2 Extractor) to improve the performance of Explicit argument extraction, (2) propose new features for building three classifiers (i.e., Connective Classifier, Arg1 Position Classifier and Explicit Sense Classifier), (3) construct two Implicit argument extractors (i.e., Implicit Arg1 Extractor and Implicit Arg2 Extractor) for Non-EntRel relations, and (4) perform Non-Explicit sense classification on the refined arguments. Our system ranks first on both test datasets, i.e., PDTB Section 23 and a blind test dataset.
5,063.8
2015-07-01T00:00:00.000
[ "Computer Science", "Sociology" ]